
TEXTOS - INGLES II
2018

TEXT 1: New Transparent Metal Could Make Smartphones Cheaper. By Nathaniel Scharping

As smartphones get smaller, cheaper and faster, one essential component remains costly: the screen.

Almost 90 percent of smartphone touchscreens utilize a rare and expensive compound called indium
tin oxide, which has kept the price of such screens high. Now, researchers at Pennsylvania State
University have developed a new material, called strontium vanadate, that shares the transparent and
conductive properties of indium tin oxide at a fraction of the cost.

The researchers detailed their findings in an article published earlier this month in the journal Nature
Materials. They crafted a transparent metal composed of strontium and vanadium with an unusual
configuration of electrons that allows light to pass through while retaining the electrically conductive
properties of metals.

A New Way To Look At Screens

The researchers see smartphone screens, which need to be electrically conductive and transparent, as
the most immediate application of their discovery. Indium tin oxide possesses those integral properties
but its cost comes in around $750 per kilogram. As a result, when you shell out several hundred
dollars for a new smartphone, roughly 40 percent of the cost is tied up in the screen. Both strontium
and vanadium sell for just $25 or less per kilogram, according to the researchers. In addition,
researchers produced the compound in a film only 10 nanometers thick, making it perfect for
touchscreens.

Typically, metals share their electrons freely, which allows them to move throughout the structure
uninhibited, much like gaseous molecules. This gives metals their distinctive properties, such as
malleability and conductivity. The electrons in strontium vanadate, a so-called correlated metal,
behave more like a liquid than a gas, moving slower and interacting with each other in curious ways.

Electrons in strontium vanadate molecules exhibit stronger forms of electrostatic interaction — the
forces acting between positively and negatively charged particles. These forces slow down the electrons
and cause them to interact in complex ways, according to the researchers. The end result is a metal that
retains its conductivity, but is less reflective when light is shined on it, making it transparent. This
combination of properties makes it perfect for use in smartphone screens. The researchers also see
applications for their compound in a new form of solar cells, as well as smart windows and television
screens.

Transparent metal sounds like an oxymoron, but you could one day be reading this story through it.
 

New Transparent Metal Could Make Smartphones Cheaper - D-brief. (2017). [online] D-brief.
Available at: http://blogs.discovermagazine.com/d-brief/2015/12/24/new-transparent-metal-could-
make-smartphones-cheaper/#.WAuYxo8rLIU [Accessed 19 Feb. 2017].

TEXT 2: Researchers want to use hardware to fight computer viruses

Dmitry Ponomarev, professor of computer science at Binghamton University, State University of New
York. Credit: Jonathan Cohen/Binghamton University

Fighting computer viruses isn't just for software anymore. Binghamton University researchers will use
a grant from the National Science Foundation to study how hardware can help protect computers too.

"The impact will potentially be felt in all computing domains, from mobile to clouds," said Dmitry
Ponomarev, professor of computer science at Binghamton University, State University of New York.
Ponomarev is the principal investigator of a project titled "Practical Hardware-Assisted Always-On
Malware Detection."
More than 317 million pieces of new malware -- computer viruses, spyware, and other malicious
programs -- were created in 2014 alone, according to work done by Internet security teams at
Symantec and Verizon. Malware is growing in complexity, with crimes such as digital extortion (a
hacker steals files or locks a computer and demands a ransom for decryption keys) becoming large
avenues of cyber attack.
"This project holds the promise of significantly impacting an area of critical national need to help
secure systems against the expanding threats of malware," said Ponomarev. "[It is] a new approach to
improve the effectiveness of malware detection and to allow systems to be protected continuously
without requiring the large resource investment needed by software monitors."
Countering threats has traditionally been left solely to software programs, but Binghamton researchers
want to modify a computer's central processing unit (CPU) chip -- essentially, the machine's brain --
by adding logic to check for anomalies while running a program like Microsoft Word. If an anomaly is
spotted, the hardware will alert more robust software programs to check out the problem. The
hardware won't be right about suspicious activity 100 percent of the time, but since the hardware is
acting as a lookout at a post that has never been monitored before, it will improve the overall
effectiveness and efficiency of malware detection.
"The modified microprocessor will have the ability to detect malware as programs execute by
analyzing the execution statistics over a window of execution," said Ponomarev. "Since the hardware
detector is not 100-percent accurate, the alarm will trigger the execution of a heavy-weight software
detector to carefully inspect suspicious programs. The software detector will make the final decision.
The hardware guides the operation of the software; without the hardware the software will be too slow
to work on all programs all the time."


The modified CPU will use low-complexity machine learning -- the ability to learn without being
explicitly programmed -- to distinguish malware from normal programs, which is the primary area of
expertise of Lei Yu, a co-investigator on the project.
"The detector is, essentially, like a canary in a coal mine to warn software programs when there is a
problem," said Ponomarev. "The hardware detector is fast, but is less flexible and comprehensive. The
hardware detector's role is to find suspicious behavior and better direct the efforts of the software."
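
To make the division of labour concrete, here is a minimal sketch in Python of the two-stage pipeline the researchers describe: a cheap, always-on check over per-window execution statistics that escalates suspicious windows to a heavyweight software detector. The counter names, weights and threshold are illustrative assumptions, not details of the Binghamton design.

```python
# Toy sketch of the two-stage detection pipeline described above.
# Feature names, weights and thresholds are illustrative assumptions,
# not the actual design of the Binghamton project.

from dataclasses import dataclass

@dataclass
class WindowStats:
    """Execution statistics collected by the (hypothetical) hardware over one window."""
    branch_miss_rate: float   # mispredicted branches / total branches
    cache_miss_rate: float    # cache misses / memory accesses
    syscall_rate: float       # system calls per 1,000 instructions

def hardware_detector(stats: WindowStats, threshold: float = 1.0) -> bool:
    """Cheap, always-on check: a simple weighted score over counters.
    It is deliberately allowed to raise false alarms."""
    score = (4.0 * stats.branch_miss_rate
             + 3.0 * stats.cache_miss_rate
             + 0.5 * stats.syscall_rate)
    return score > threshold

def software_detector(process_name: str) -> bool:
    """Heavy-weight analysis (signature scan, behavioural model, ...).
    Stubbed out here; it only runs when the hardware raises an alarm."""
    print(f"deep-scanning {process_name} ...")
    return False  # final verdict

def monitor(process_name: str, window: WindowStats) -> None:
    if hardware_detector(window):                 # fast path, every window
        if software_detector(process_name):       # slow path, only on alarms
            print(f"{process_name}: malware confirmed")
        else:
            print(f"{process_name}: false alarm cleared by software")

monitor("winword.exe", WindowStats(0.12, 0.30, 2.5))
```
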
Much of the work -- including exploration of the trade-offs of design complexity, detection accuracy,
performance and power consumption -- will be done in collaboration with former Binghamton
professor Nael Abu-Ghazaleh, who moved on to the University of California-Riverside in 2014.
Lei Yu, associate professor of computer science at Binghamton University, is a co-principal
investigator of the grant.
Grant funding will support graduate students who will work on the project both in Binghamton and
California, conference travel and the investigation itself. The three-year grant is for $275,000.


TEXT 3: Encryption method takes authentication to a new level, improves privacy protection

VTT Technical Research Centre of Finland has developed new kinds of encryption methods that
improve the privacy protection of consumers and enable safer, more reliable and easier-to-use user
authentication than current systems allow.

The method combines safety, usability and privacy protection; until now, implementing all three at the
same time has been a challenge.
"Our method protects, for example, the user's biometric data or typing style," says Senior Scientist
Kimmo Halunen.
In biometric authentication, the risk is that a person's permanent biometric identifiers, which cannot be
changed, leak out of the database. VTT's method stores data in the database in an encrypted form, and
all comparisons between measurement results and the database are conducted on encrypted messages,
so there is no need to decrypt any biometric data at this stage of the process.
VTT integrates new kinds of encryption methods, such as homomorphic cryptography and secure
exchange of cryptographic keys, with known methods for measuring typing styles.
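
The article does not give the cryptographic details, but the core homomorphic idea, performing computations on data that stays encrypted, can be illustrated with a toy textbook Paillier scheme. The tiny hardcoded primes below make it useful only as an illustration; it is not VTT's method and it is not secure.

```python
# Toy additive-homomorphic (textbook Paillier) sketch, Python 3.9+.
# Illustrates the general idea only: a server can combine encrypted
# measurements without ever decrypting them. NOT VTT's method, and the
# tiny hardcoded primes make it completely insecure.

import math, random

p, q = 293, 433                      # toy primes (real keys use ~1024-bit primes)
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
g = n + 1
mu = pow(lam, -1, n)                 # modular inverse of lambda mod n

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

# Homomorphic property: multiplying ciphertexts adds the plaintexts.
template_feature = encrypt(42)       # stored, encrypted biometric feature
fresh_feature    = encrypt(40)       # encrypted fresh measurement
combined = (template_feature * fresh_feature) % n2
assert decrypt(combined) == 82       # 42 + 40, computed without decrypting either value
```

Real deployments use much larger keys and purpose-built protocols on top of such primitives, but the principle is the same: comparisons happen on ciphertexts, so the permanent biometric identifiers never appear in the clear.
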
Traditional authentication based on passwords has proved to be weak, since users mostly select
weak passwords, and hackers often succeed in stealing quite large password databases. Recently,
companies such as Dropbox and Yahoo have fallen prey to such data breaches.
In addition, new types of user environments, such as smart devices, cars, and home appliances, create
challenges for user authentication with the help of passwords.
VTT is now looking for a partner for further processing and commercialisation of this method, which
could be available to consumers within a year or two.

Encryption method takes authentication to a new level, improves privacy protection. (2017). [online]
ScienceDaily. Available at: https://www.sciencedaily.com/releases/2016/09/160929082204.htm
[Accessed 19 Feb. 2017].

TEXT 4: The 'Hybrid Cloud' Dilemma

POST WRITTEN BY John Fruehe

John Fruehe is a Moor Insights & Strategy senior analyst for networking and servers

“Hybrid cloud” is one of the hottest terms in technology today. All vendors have a hybrid cloud
strategy, but like potato salad recipes, no two are alike. Cisco Systems, Dell, Hewlett Packard
Enterprise, IBM, Oracle, VMware and others all have different takes on hybrid cloud, and this presents
a challenge, especially in multi-cloud/multi-vendor environments, because few standards exist.

Clouds are a set of pooled computing resources that can be provisioned and orchestrated on the fly in
an automated manner. Amazon Web Services is a great example of a public cloud service, and your
own business may run a private cloud in its datacenter. Similar to virtualization, clouds are an elastic
pool where resources can be added and sliced in any way, all from an interface that exposes the
capability to the end user (self-servicing vs. IT job tickets).
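
As a rough illustration of what "provisioned and orchestrated on the fly" and self-servicing mean in practice, the sketch below requests and releases a single compute instance from a public cloud through the AWS SDK for Python (boto3). The image id is a placeholder and credentials are assumed to be configured in the environment.

```python
# Minimal sketch of self-service provisioning against a public cloud,
# using the AWS SDK for Python (boto3). The AMI id is a placeholder;
# credentials are assumed to be configured in the environment.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Ask the pool for one small compute resource, on demand.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder image id
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("provisioned", instance_id)

# ...and hand it back when the workload is done (elasticity).
ec2.terminate_instances(InstanceIds=[instance_id])
```
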

Originally, the hybrid cloud concept was a single cloud where compute spanned both public and
private domains, existing on both sides of the firewall. But from a security and a logistics standpoint,
this was hard to implement. Today “hybrid cloud” is becoming “hybrid cloud environment”, a strategy
where businesses might be running applications in different environments, with multiple cloud
vendors, both public and private. Data and resources are shared across multiple domains (and
providers), but each element only lives in one domain.

Normally compute follows data, but there may be instances where data and compute need to live in
different domains, whether it is for security, latency or other factors. The interconnection between
these disparate elements of the hybrid environment is where businesses struggle, and cloud service
providers step in to join it all together. Here are some basic examples of a hybrid environment:
 

[Figure omitted: basic examples of a hybrid environment. Source: John Fruehe]

Typically, private cloud might be used for proprietary or differentiated applications, essentially the
company’s “secret sauce” applications that give them a competitive advantage or potentially hold very
secure/confidential information that must live inside their firewall. Private cloud is typically not as
self-serviceable as public, but that is changing. Private clouds also require hardware, although
externally hosted options and pure OpEx models are now finding favor. Public cloud tends to be used
for application development, “bursty” workloads or applications that are non-differentiated (like the
typical back office operational and billing applications that work the same for everyone). Both public
and private have their place and long term coexistence will be fundamental for most businesses, thus
the need for standards.

Clouds differ from virtualization and traditional IT infrastructures in the following ways:

[Comparison chart omitted]

Most companies are entering a hybrid cloud environment because they are dealing with multiple
clouds, data sources and vendors. Here are some examples of what vendors might refer to as hybrid
cloud environments:

● Running a private cloud in your datacenter while also leveraging public cloud services
like Amazon Web Services or Microsoft Azure
● A private cloud application that integrates an external data feed like meteorological or
mapping data from a public source
● Using public cloud analytical tools to analyze your company’s proprietary internal data
that sits within your datacenter (or in a hosted private cloud)
● An Internet of Things (IoT) private cloud application in your datacenter using public
cloud services as endpoint gateways for collection of telemetry data
● Using “bursting” to push private cloud apps to a provider when traffic explodes
With many vendors approaching hybrid cloud differently, there need to be some standards, common
methodology and lexicon to help businesses navigate this area. Multi-cloud and multi-vendor are
becoming the preferred strategies for most companies; interconnection, policy adherence and common
management will need to be in place for these strategies to succeed.

The Open Networking User Group (ONUG) is actually working on this challenge, creating a Hybrid
Cloud Framework that will enable vendors to not only get onto the same page in how they address
hybrid clouds, but also solve one of the biggest customer challenges being faced today. At the last
ONUG meeting someone posed the question, “Is cloud just the next generation of proprietary lock-
in?” A compelling question to be sure, primarily because most believe that they should be able to
move an app from one cloud to another—but few (if any) have ever accomplished that. A common
framework would go a long way towards helping businesses work with hosters, brokers and cloud
technology providers. The working group will be focused on defining the standards around security,
contracts, technical architectures and more, helping to put together something similar to the Rosetta
Stone of hybrid cloud, enabling businesses and providers to all work on the same page. Just as
standardization helped the server business grow rapidly, some standardization in the hybrid cloud
space could make it easier for businesses to make the move into the cloud, gaining more efficiency
and flexibility. By not being locked in, a business can leverage clouds without worrying about the
downstream complications.

If you are going to be in New York on October 24-25 it would be worth your time to attend the ONUG
Fall 2016 event to learn more about how hybrid cloud will impact your business.

Disclosure: My firm, Moor Insights & Strategy, like all research and analyst firms, provides or has
provided research, analysis, advising, and/or consulting to many high-tech companies in the industry,
including Cisco Systems, Dell, Hewlett Packard Enterprise, IBM and Microsoft which were cited in
this article. I do not hold any equity positions with any companies cited in this column.
 

John Fruehe (2016). The 'Hybrid Cloud' Dilemma. Retrieved October 4th, 2016, from:
www.forbes.com/sites/moorinsights/2016/10/04/the-hybrid-cloud-dilemma/#e5f2327e1d9b.

TEXT 5: E-Commerce Is Changing, And So Are The Business Opportunities It Creates

By Neel Murthy

Even conservative estimates put e-commerce growth at 16 percent every year in the U.S., or
doubling every five years. While Amazon, eBay and the countless brick-and-mortars
with an online presence still hold a lot of the market share, there are still quite a few business
opportunities out there to break into the market.

Here’s some insight into recent market trends that I’ve gleaned while working with some very
knowledgeable entrepreneurs:

1. The sharing/rental economy remains in vogue. Sites like RentTheRunway and AirBnB
are redefining their respective markets. AirBnB started with a need for cheap
accommodations during travel, especially around conferences, and evolved into a
massive marketplace that has redefined short and even long-term access to living space.
This took a lot of time and struggle; at one point, the founders sold Obama O’s cereal to
keep the company afloat.
2. Prime for all. ShopRunner caught onto the fact that customers were having a
phenomenal experience when using Amazon Prime (in fact, consumer surveys show
upwards of 43% of consumers will associate a good delivery experience with the e-
commerce company, even if it’s not in their control). Therefore, they were able to pitch
all of Amazon’s competitors to offer 2-day shipping through them and essentially create
a competitor to the Prime user experience (Amazon vs. the rest, but Amazon will still
probably win…).
3. Merging online and offline. Google recently launched their shopping
express offering, giving you same day delivery for free. Why? Because brick-and-mortar
stores are being eroded by e-commerce and Google found an opportunity to provide
offline services and speeds with an online interface and user experience. Only time will
tell if it’s sustainable. 
4. Niche solutions for our increasingly busy lives. Many people love Apple,
but everyone hates the wait time for the Genius Bar. Enter a company called iCracked,
which offers an asynchronous repair of iPhones and other Mac products. 
E-Commerce Is Growing, And So Are The Problems It Creates

You can start by solving some of the frictions e-commerce creates -- that's what inspired us to get
started. We realized e-comm was growing rapidly, but couldn’t see how the end user actually
benefited from the experience when he or she was not around to receive the packages.

That turned out to be a huge opportunity. In urban markets, up to 40% of deliveries are missed at least
once. Before us, you could:

● Get your packages delivered to work. It usually works, but beware of mail room delays,
nosy neighbors and having to lug the package home after work.
● Forward the call. If you live in an apartment, sometimes you can have your buzzer
forward the call to your cell phone so you can unlock the gate for the delivery person.
Ringo is a paid service that will manage that for you, but it's tricky.
● Get your packages delivered to a corner grocer. They tend to be trustworthy folks and
will happily receive your packages since you’ll probably buy something when you go to
pick it up.
● Get your packages held at UPS or the Post Office. Hours are terrible and you need
different solutions for different carriers.
Since none of those solutions worked for everyone (or for that matter, worked for us), we at Swapbox
wanted to fix this problem. We had personal motivation, a clear pain point, and eventually an effective
solution.

Identifying New E-Commerce Opportunities

What’s left to be solved in e-commerce? A lot. Here are some of my thoughts:

1. Need to make returns? Tough. Along with increased online shopping comes an
increased need for easy returns. There are a few high-end sites that will let you do
returns for free, but that’s not the norm. Not even Amazon has that policy (though
maybe one day they’ll send a drone to pick it up!). But if you get a membership to a site
like Return Saver, which allows you to pay a membership fee and return as much as you
want through FedEx Ground (with some caveats), you don’t have to worry
about it. 
2. The fit problem. Clothing and accessories are a huge part of e-commerce, but ordering
apparel online has one huge disadvantage to in-store purchases…you can’t try it on. One
company, MTailor, took a creative approach to this problem and decided to come up
with a computer vision solution that uses an app to get your measurements
algorithmically. They purport to be 20% more accurate than a tailor.
3. Delivery solutions for other necessary products. I still get my medicine from a
pharmacy, but in today’s day and age, I shouldn’t have to. While there are a lot of
solutions tied to specific pharmacies, I would gladly pay for a service that can be used
by any provider. The difficulty here is around getting distribution licenses and dealing
with state by state regulations around legal drugs.
There’s plenty more out there. The magic of e-commerce is really in creatively alleviating points of
friction. It’s up to you to explore.

Neel Murthy is the CEO of Swapbox, the easiest way to send and receive packages.

Neel Murthy (2015). E-Commerce Is Changing, And So Are The Business Opportunities It Creates.
Retrieved January 26th, 2015, from: www.forbes.com/sites/theyec/2015/01/26/e-commerce-is-
changing-and-so-are-the-business-opportunities-it-creates/#9112cf016181.

TEXT 6: Human-Like Neural Networks Make Computers Better Conversationalists

By Ben Thomas

HAL 9000, depicted as a glowing red “eye,” was the frighteningly charismatic computer protagonist in
Stanley Kubrick’s 1968 movie “2001: A Space Odyssey.” (Credit: Screengrab from YouTube)

If you’ve ever tried to hold a conversation with a chatbot like CleverBot, you know how quickly the
conversation turns to nonsense, no matter how hard you try to keep it together.

But now, a research team led by Bruno Golosio, assistant professor of applied physics at Università di
Sassari in Italy, has taken a significant step toward improving human-to-computer conversation.
Golosio and colleagues built an artificial neural network, called ANNABELL, that aims to emulate the
large-scale structure of human working memory in the brain — and its ability to hold a conversation is
eerily human-like.

Natural Language Processing

Researchers have been trying to design software that can make sense of human language, and respond
coherently, since the 1940s. The field is known as natural language processing (NLP), and although
amateurs and professionals enter their best NLP programs into competitions every year, the past seven
decades still haven’t produced a single NLP program that allows computers to consistently fool
questioners into thinking they’re human.

NLP has attracted a wide variety of approaches over the years, and linguists, computer scientists and
cognitive scientists have focused on designing so-called symbolic architectures, or software programs
that store units of speech as symbols. It’s an approach that requires a lot of top-down management.

A Different School

Another school of thought, the “connectionist approach,” holds that it’s more effective to process
language via artificial neural networks (ANNs). These computerized systems begin as blank slates,
and then they learn to associate certain speech patterns with clusters of interconnected processing
units. This open-ended structure enables ANNs to build connections on the fly, with very little direct
supervision — in much the same way a human brain does.

The crucial distinction between the two approaches is that symbolic architectures require specific rules
in order to make decisions, while ANNs aren’t as beholden to rigid structures. Instead of checking
whether an answer is right or wrong, ANNs choose the answer that’s most likely to be right. And when
it comes to natural language processing, this approach is much more versatile, and better at crafting
human-sounding answers.
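
A toy contrast, not ANNABELL itself, can make the distinction clearer: a symbolic system answers only when an exact rule matches, while a connectionist system scores every candidate answer and picks the one most likely to be right.

```python
# Tiny illustration of the contrast drawn above (not ANNABELL itself):
# a symbolic system applies a hard rule, while a connectionist system
# scores every candidate answer and picks the most probable one.

import math

def symbolic_answer(question: str, rules: dict) -> str:
    # Right-or-wrong lookup: fails completely on anything not covered by a rule.
    return rules.get(question, "I don't understand.")

def connectionist_answer(scores: dict) -> str:
    # Scores come from the network; softmax turns them into probabilities.
    exp = {ans: math.exp(s) for ans, s in scores.items()}
    total = sum(exp.values())
    probs = {ans: v / total for ans, v in exp.items()}
    return max(probs, key=probs.get)   # the answer most likely to be right

print(symbolic_answer("what is your favorite movie?", {"what is your name?": "ANNABELL"}))
print(connectionist_answer({"2001: A Space Odyssey": 2.3, "I like movies": 0.7, "banana": -1.0}))
```
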

ANNABELL is a cognitive architecture model made up of artificial neurons, which learn to
communicate using human language starting from a blank slate. (Credit: Bruno Golosio)

Inspired by the successes of earlier ANNs, Golosio and his team engineered a brand-new type of ANN
known as ANNABELL (Artificial Neural Network with Adaptive Behavior Exploited for Language
Learning). The team designed ANNABELL to be able to pick up language by building a system of
interconnected associations from scratch, in the same way a human infant does. To give ANNABELL
the right tools for the job, Golosio’s team designed their network around a very specific model of
human-style memory.
 

A Working Model

Memory is generally divided into short-term and long-term storage. Short-term memories are easy to
retrieve and easy to lose, while long-term memories take longer to form, but stick around.

Many researchers also add a third category, working memory, which is sometimes described as your
“memory of the present moment.” Have you ever asked someone, “What was that you just said?” and
started to repeat the part of the sentence you caught — only to realize with surprise that you somehow
remembered the whole sentence, and could repeat it verbatim? That’s your working memory system in
action.

And as Golosio and his team knew, an ANN designed around a multi-component working memory
model could be a powerful tool for processing and creating human-like communication.

“For example, if someone asks you, ‘what is your favorite movie?’” Golosio explains, “you can focus
your attention on the word ‘movie,’ and use that word as a cue for retrieving information from long-
term memory into working memory, like when you type a keyword into Google.”

And similar to using Google, each of the “search results” in your brain’s working memory contains
links to stashes of more detailed information about the topic. Golosio’s team hoped to emulate this
search-and-link functionality in an ANN, an approach that, they hoped, might take
ANNABELL to a new level of human-likeness.
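
As a loose illustration of that cue-driven retrieval (only the analogy, not ANNABELL's code), a few lines of Python can mimic a keyword pulling linked items from a long-term store into a small working-memory buffer. The store, buffer size and question are illustrative assumptions.

```python
# Toy sketch of the cue-based retrieval Golosio describes above:
# a keyword pulls linked items from a long-term store into a small
# working-memory buffer. Purely illustrative; not ANNABELL's code.

long_term_memory = {
    "movie": ["favorite movie: 2001: A Space Odyssey", "last movie seen: Arrival"],
    "food":  ["favorite food: pizza"],
}

WORKING_MEMORY_SIZE = 3   # working memory holds only a few items at a time

def retrieve(question: str) -> list[str]:
    working_memory = []
    for word in question.lower().split():          # focus on each word as a cue
        for item in long_term_memory.get(word.strip("?"), []):
            if len(working_memory) < WORKING_MEMORY_SIZE:
                working_memory.append(item)        # "search result" linked to the cue
    return working_memory

print(retrieve("what is your favorite movie?"))
```
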

Correct and Coherent

ANNABELL’s building blocks are artificial neurons simulated inside a powerful computer. Instead of
trying to simulate the millions of chemical interactions that go on inside a real neuron every second,
the computer simply calculates the likelihood that each neuron will fire, based on the inputs it receives
from the other simulated neurons in the network. As in a biological brain, digital neurons that fire
together wire together; and that ability to fine-tune the strength of neural connections (and thus, the
likelihood that a certain neuron’s firing will trigger certain other neurons to fire) gives ANNABELL
the power to learn new associations.
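
A generic "fire together, wire together" toy, written here as a guess at the flavour of the mechanism rather than ANNABELL's actual learning rule, looks like this: each unit fires with a probability set by its weighted inputs, and connections between co-active units are strengthened.

```python
# Toy "fire together, wire together" sketch of the mechanism described
# above: each unit fires with a probability driven by its weighted inputs,
# and connections between co-active units are strengthened. A generic
# Hebbian toy, not ANNABELL's actual learning rule.

import numpy as np

rng = np.random.default_rng(0)
n_units = 5
weights = rng.normal(0.0, 0.1, size=(n_units, n_units))   # connection strengths
np.fill_diagonal(weights, 0.0)

def step(activity, weights, learning_rate=0.05):
    drive = weights @ activity                      # input each unit receives
    p_fire = 1.0 / (1.0 + np.exp(-drive))           # likelihood of firing
    fired = (rng.random(n_units) < p_fire).astype(float)
    # Hebbian update: strengthen links between units that fired together.
    weights += learning_rate * np.outer(fired, fired)
    np.fill_diagonal(weights, 0.0)
    return fired, weights

activity = np.array([1.0, 0.0, 1.0, 0.0, 0.0])      # initial pattern of firing
for _ in range(10):
    activity, weights = step(activity, weights)
```
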
 


So far, that description fits any neural network, but Golosio and his team took ANNABELL a step
further. They structured ANNABELL’s large-scale neural connectivity in a way that simulates verbal
components of human working memory. This means ANNABELL can focus, or “listen,” to groups of
words, associate them with other words and phrases, explore possible ways of combining words and
receive “rewards” for answering questions correctly.

Once ANNABELL’s neural structure was in place, Golosio and the team fed the system huge
databases of words and sentences: descriptions of relationships between people, between parts of the
body and between animals and their categories. They also included sample dialogues between a
mother and child and a text-based virtual house.

Making Small Talk

Then researchers asked ANNABELL questions about what she’d learned, and the results were
striking. ANNABELL correctly answered 82.4 percent of questions related to the people dataset, 85.3
percent of those related to the parts of the body dataset, and 95.3 percent of those related to the
categorization dataset. What’s more, in natural conversation, ANNABELL comes across as
remarkably human-like, especially when compared with other current-generation NLP software. The
results appeared Wednesday in the journal PLOS ONE.

While even ANNABELL is still a long way away from passing for human, the system serves as proof-
of-concept for an intriguing idea: that it’s possible to start from a blank slate, and teach a computer to
have coherent conversations about potentially unlimited topics.

In the immediate future, Golosio and his team plan to upload ANNABELL into a robot, which can
experience the world firsthand, and learn to communicate about those experiences.

That may mean tomorrow’s generation of chatbots will be not only coherent, but able to talk about
experiences they’ve actually had in the real world.

Ben Thomas (2015). Human-Like Neural Networks Make Computers Better Conversationalists.
Retrieved November 11th, 2015, from: blogs.discovermagazine.com/crux/2015/11/11/computer-
conversation-artificial-neurons/#.WKkROVWLTIV.

TEXT 7: No GPS, no problem: Next-generation navigation


 

Simulation results for an unmanned drone flying over downtown Los Angeles, showing the true
trajectory (red line), the trajectory from GPS only (yellow line), and from GPS aided with cellular
signals (blue line).

Credit: ASPIN Laboratory at UC Riverside

A team of researchers at the University of California, Riverside has developed a highly reliable and
accurate navigation system that exploits existing environmental signals such as cellular and Wi-Fi,
rather than the Global Positioning System (GPS). The technology can be used as a standalone
alternative to GPS, or complement current GPS-based systems to enable highly reliable, consistent,
and tamper-proof navigation. The technology could be used to develop navigation systems that meet
the stringent requirements of fully autonomous vehicles, such as driverless cars and unmanned drones.

Led by Zak Kassas, assistant professor of electrical and computer engineering in UCR's Bourns
College of Engineering, the team presented its research at the 2016 Institute of Navigation Global
Navigation Satellite System Conference (ION GNSS+), in Portland, Ore., in September. The two
studies, "Signals of Opportunity Aided Inertial Navigation" and "Performance Characterization of
Positioning in LTE Systems," both won best paper presentation awards.
Most navigation systems in cars and portable electronics use the space-based Global Navigation
Satellite System (GNSS), which includes the U.S. system GPS, Russian system GLONASS, European
system Galileo, and Chinese system Beidou. For precision technologies, such as aerospace and
missiles, navigation systems typically combine GPS with a high-quality on-board Inertial Navigation
System (INS), which delivers a high level of short-term accuracy but eventually drifts when it loses
touch with external signals.
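
A one-dimensional toy shows why the pairing matters: a dead-reckoned inertial estimate drifts under a small sensor bias, and periodic external fixes pull it back. The numbers and the simple blending gain below are illustrative assumptions, not the team's algorithms.

```python
# One-dimensional toy of GPS/INS fusion: the inertial estimate drifts
# because of a small accelerometer bias, and each external position fix
# pulls it back. A crude complementary-filter blend, for illustration only.

import numpy as np

dt, steps = 0.1, 200
bias = 0.05                      # unmodelled accelerometer bias (causes drift)
gain = 0.2                       # how strongly a fix corrects the estimate

true_pos, true_vel = 0.0, 1.0    # ground truth: constant 1 m/s
est_pos, est_vel = 0.0, 1.0

rng = np.random.default_rng(1)
for k in range(steps):
    true_pos += true_vel * dt
    # INS propagation with a biased acceleration measurement.
    est_vel += bias * dt
    est_pos += est_vel * dt
    # An external fix (GPS, or a signal-of-opportunity range) every 2 s.
    if k % 20 == 0:
        fix = true_pos + rng.normal(0.0, 0.5)        # noisy position fix
        est_pos += gain * (fix - est_pos)            # blend the fix into the estimate

print(f"final error with fixes: {abs(est_pos - true_pos):.2f} m")
```
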
Despite advances in this technology, current GPS/INS systems will not meet the demands of future
autonomous vehicles for several reasons: First, GPS signals alone are extremely weak and unusable in
certain environments like deep canyons; second, GPS signals are susceptible to intentional and
unintentional jamming and interference; and third, civilian GPS signals are unencrypted,
unauthenticated, and specified in publicly available documents, making them spoofable (i.e.,
hackable).
Current trends in autonomous vehicle navigation systems therefore rely not only on GPS/INS, but also
on a suite of other sensor-based technologies such as cameras, lasers, and sonar.
"By adding more and more sensors, researchers are throwing 'everything but the kitchen sink' to
prepare autonomous vehicle navigation systems for the inevitable scenario that GPS signals become
unavailable. We took a different approach, which is to exploit signals that are already out there in the
environment," Kassas said.
Instead of adding more internal sensors, Kassas and his team in UCR's Autonomous Systems
Perception, Intelligence, and Navigation (ASPIN) Laboratory have been developing autonomous
vehicles that could tap into the hundreds of signals around us at any point in time, like cellular, radio,
television, Wi-Fi, and other satellite signals.
In the research presented at the ION GNSS+ Conference, Kassas' team showcased ongoing research
that exploits these existing communications signals, called "signals of opportunity (SOP)" for
navigation. The system can be used by itself, or, more likely, to supplement INS data in the event that
GPS fails. The team's end-to-end research approach includes theoretical analysis of SOPs in the
environment, building specialized software-defined radios (SDRs) that will extract relevant timing and
positioning information from SOPs, developing practical navigation algorithms, and finally testing the
system on ground vehicles and unmanned drones.
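
As a rough sketch of the positioning step (illustrative only, not the ASPIN Laboratory's algorithms), the snippet below estimates a 2-D position by least-squares fitting noisy range measurements to transmitters at known locations, the kind of information an SDR front end might derive from signal timing. The tower layout and noise level are assumptions.

```python
# Toy multilateration sketch: estimate a 2-D position from noisy range
# measurements to transmitters at known locations (e.g. cell towers).
# Illustrative only; not the ASPIN Laboratory's algorithms.

import numpy as np
from scipy.optimize import least_squares

towers = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0], [800.0, 900.0]])
true_position = np.array([350.0, 420.0])

rng = np.random.default_rng(2)
ranges = np.linalg.norm(towers - true_position, axis=1) + rng.normal(0.0, 5.0, len(towers))

def residuals(p):
    # Difference between predicted and measured ranges for a candidate position p.
    return np.linalg.norm(towers - p, axis=1) - ranges

fit = least_squares(residuals, x0=np.array([500.0, 500.0]))
print("estimated position:", fit.x)     # close to (350, 420)
```
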
"Autonomous vehicles will inevitably result in a socio-cultural revolution. My team is addressing the
challenges associated with realizing practical, cost-effective, and trustworthy autonomous vehicles.
Our overarching goal is to get these vehicles to operate with no human in the loop for prolonged
periods of time, performing missions such as search, rescue, surveillance, mapping, farming,
firefighting, package delivery, and transportation," Kassas said.

University of California - Riverside. (2016, October 13). No GPS, no problem: Next-generation
navigation. ScienceDaily. Retrieved February 19, 2017 from
www.sciencedaily.com/releases/2016/10/161013150039.htm
