
TEXTOS - INGLES I
2018
TEXT 1: Ten typical jobs graduates can do in IT

The IT industry is host to a whole raft of job titles. To help you, we've deciphered ten of the top
IT job titles you might encounter when searching for graduate jobs.
 

To make sure you find the right graduate IT job with the right employer, always check job
descriptions carefully when applying so that you understand the skills and responsibilities of the
role.
 

The IT industry is well known for its wide range of job titles which can make it hard for graduates
interested in this sector to pin down exactly what people do.
 

As a job-hunting graduate, chances are you’ve got a lot on your plate so we’ve decoded some of the
more common job titles you may come across during your graduate job search. However, pay close
attention to the job description of particular positions you apply for. Make note of the key skills and
competences wanted, and ask questions at interviews to find out more specific information about what
the role will involve day to day. This will ensure that you find the right graduate job in IT with the right
employer.

Graduate job 1: Software engineer

Also known as: application programmer, software architect, system programmer/engineer.

This job in brief: The work of a software engineer typically includes designing and programming
system-level software: operating systems, database systems, embedded systems and so on. They
understand how both software and hardware function. The work can involve talking to clients and
colleagues to assess and define what solution or system is needed, which means there’s a lot of
interaction as well as full-on technical work. Software engineers are often found in electronics and
telecommunications companies. A computing, software engineering or related degree is needed.

Key skills include:

● analysis
● logical thinking
● teamwork
● attention to detail.

Graduate job 2: Systems analyst


 

Also known as: product specialist, systems engineer, solutions specialist, technical designer.

This job in brief: Systems analysts investigate and analyse business problems and then design
information systems that provide a feasible solution, typically in response to requests from their business
or a customer. They gather requirements and identify the costs and the time needed to implement the
project. The job needs a mix of business and technical knowledge, and a good understanding of people.
It’s a role for analyst programmers to move into and typically requires a few years’ experience from
graduation.

Key skills include:

● ability to extract and analyse information
● good communication
● persuasion
● sensitivity.

Graduate job 3: Business analyst

Also known as: business architect, enterprise-wide information specialist.

This job in brief: Business analysts are true midfielders, equally happy talking with technology people,
business managers and end users. They identify opportunities for improvement to processes and business
operations using information technology. The role is project based and begins with analysing a
customer’s needs, gathering and documenting requirements and creating a project plan to design the
resulting technology solution. Business analysts need technology understanding, but don’t necessarily
need a technical degree.

Key skills include:

● communication
● presentation
● facilitation
● project management
● problem solving.

Graduate job 4: Technical support

Also known as: helpdesk support, IT support analyst, operations analyst.

This job in brief: These are the professional troubleshooters of the IT world. Many technical support
specialists work for hardware manufacturers and suppliers solving the problems of business customers
or consumers, but many work for end-user companies supporting, monitoring and maintaining
workplace technology and responding to users’ requests for help. Some lines of support require
professionals with specific experience and knowledge, but tech support can also be a good way into the
industry for graduates.

Key skills include:


● wide ranging tech knowledge
● problem solving
● communication/listening
● patience
● diplomacy.

Graduate job 5: Network engineer

Also known as: hardware engineer, network designer.

This job in brief: Network engineering is one of the more technically demanding IT jobs. Broadly
speaking the role involves setting up, administering, maintaining and upgrading communication
systems, local area networks and wide area networks for an organisation. Network engineers are also
responsible for security, data storage and disaster recovery strategies. It is a highly technical role and
you’ll gather a hoard of specialist technical certifications as you progress. A telecoms or computer
science-related degree is needed.

Key skills include:

● specialist network knowledge
● communication
● planning
● analysis
● problem solving.

Graduate job 6: Technical consultant

Also known as: IT consultant, application specialist, enterprise-wide information specialist.

This job in brief: The term ‘consultant’ can be a tagline for many IT jobs, but typically technical
consultants provide technical expertise to, and develop and implement IT systems for, external clients.
They can be involved at any or all stages of the project lifecycle: pitching for a contract; refining a
specification with the client team; designing the system; managing part or all of the project; after sales
support... or even developing the code. A technical degree is preferred, but not always necessary.

Key skills include:

● communication
● presentation
● technical and business understanding
● project management
● teamwork.

Graduate job 7: Technical sales

Also known as: sales manager, account manager, sales executive.

This job in brief: Technical sales may be one of the least hands-on technical roles, but it still requires an
understanding of how IT is used in business. You may sell hardware, or extol the business benefits of
whole systems or services. Day to day, the job could involve phone calls, meetings, conferences and
drafting proposals. There will be targets to meet and commission when you reach them. A technology degree isn't necessarily essential, but you will need to have a thorough technical understanding of the
product you sell.

Key skills include:

● product knowledge
● persuasion
● interpersonal skills
● drive
● mobility
● business awareness.

Graduate job 8: Project manager

Also known as: product planner, project leader, master scheduler.

This job in brief: Project managers organise people, time and resources to make sure information
technology projects meet stated requirements and are completed on time and on budget. They may
manage a whole project from start to finish or manage part of a larger ‘programme’. It isn’t an entry-
level role: project managers have to be pretty clued up. This requires experience and a good foundation
of technology and soft skills, which are essential for working with tech development teams and higher
level business managers.
Key skills include:

● organisation
● problem solving
● communication
● clear thinking
● ability to stay calm under pressure.

Graduate job 9: Web developer

Also known as: web designer, web producer, multimedia architect, internet engineer.

This job in brief: Web development is a broad term and covers everything to do with building websites
and all the infrastructure that sits behind them. The job is still viewed as the trendy side of IT years after
it first emerged. These days web development is pretty technical and involves some hardcore
programming as well as the more creative side of designing the user interfaces of new websites. The
role can be found in organisations large and small.

Key skills include:

● basic understanding of web technologies (client side, server side and databases)
● analytical thinking
● problem solving
● creativity.

Graduate job 10: Software tester


 

Also known as: test analyst, software quality assurance tester, QA analyst.

This job in brief: Bugs can have a massive impact on the productivity and reputation of an IT firm.
Testers try to anticipate all the ways an application or system might be used and how it could fail. They
don’t necessarily program but they do need a good understanding of code. Testers prepare test scripts
and macros, and analyse results, which are fed back to the project leader so that fixes can be made.
Testers can also be involved at the early stages of projects in order to anticipate pitfalls before work
begins. You can potentially get to a high level as a tester.
Key skills include:

● attention to detail
● creativity
● organisation
● analytical and investigative thinking
● communication.
Ten typical jobs graduates can do in IT. (2017). [online] TARGETjobs. Available at:
https://targetjobs.co.uk/career-sectors/it-and-technology/advice/286189-ten-typical-jobs-graduates-
can-do-in-it [Accessed 19 Feb. 2017].

TEXT 2: Computers make the world smaller and smarter

The ability of tiny computing devices to control complex operations has transformed the way many tasks
are performed, ranging from scientific research to producing consumer products. Tiny 'computers on a
chip' are used in medical equipment, home appliances, cars and toys. Workers use handheld computing
devices to collect data at a customer site, to generate forms, to control inventory, and to serve as desktop
organisers.

Not only is computing equipment getting smaller, it is getting more sophisticated. Computers are part
of many machines and devices that once required continual human supervision and control. Today,
computers in security systems result in safer environments, computers in cars improve energy efficiency,
and computers in phones provide features such as call forwarding, call monitoring, and call answering.

These smart machines are designed to take over some of the basic tasks previously performed by people;
by so doing, they make life a little easier and a little more pleasant. Smart cards store vital information
such as health records, drivers' licenses, bank balances, and so on. Smart phones, cars, and appliances
with built in computers can be programmed to better meet individual needs. A smart house has a built-
in monitoring system that can turn lights on and off, open and close windows, operate the oven, and
more.

With small computing devices available for performing smart tasks like cooking dinner, programming
the DVD recorder, and controlling the flow of information in an organization, people are able to spend
more time doing what they often do best - being creative. Computers can help people work more
creatively.
 

Multimedia systems are known for their educational and entertainment value, which we call
'edutainment'. Multimedia combines text with sound, video, animation, and graphics, which greatly
enhances the interaction between user and machine and can make information more interesting and
appealing to people. Expert systems software enables computers to 'think' like experts. Medical
diagnosis expert systems, for example, can help doctors pinpoint a patient's illness, suggest further tests,
and prescribe appropriate drugs.

Connectivity enables computers and software that might otherwise be incompatible to communicate and
to share resources. Now that computers are proliferating in many areas and networks are available for
people to access data and communicate with others, personal computers are becoming interpersonal
PCs. They have the potential to significantly improve the way we relate to each other. Many people
today telecommute – that is, use their computers to stay in touch with the office while they are working
at home. With the proper tools, hospital staff can get a diagnosis from a medical expert hundreds or
thousands of miles away. Similarly, the disabled can communicate more effectively with others using
computers.

Distance learning and videoconferencing are concepts made possible with the use of an electronic
classroom or boardroom accessible to people in remote locations. Vast databases of information are
currently available to users of the Internet, all of whom can send mail messages to each other. The
information superhighway is designed to significantly expand this interactive connectivity so that people
all over the world will have free access to all these resources.

People power is critical to ensuring that hardware, software, and connectivity are effectively integrated
in a socially responsible way. People - computer users and computer professionals - are the ones who
will decide which hardware, software, and networks endure and how great an impact they will have on
our lives. Ultimately people power must be exercised to ensure that computers are used not only
efficiently but in a socially responsible way.

Glendinning, Eric H.; McEwan, John (2006). Oxford English for Information Technology, Second Edition. Oxford: Oxford University Press.

TEXT 3: Six Important Stages in the Data Processing Cycle


 

Much of data management is essentially about extracting useful information from data. To do this, data
must go through a data mining process to be able to get meaning out of it. There is a wide range of
approaches, tools and techniques to do this, and it is important to start with the most basic understanding
of processing data.

What is Data Processing?

Data processing is simply the conversion of raw data to meaningful information through a process. Data
is manipulated to produce results that lead to a resolution of a problem or improvement of an existing
situation. Similar to a production process, it follows a cycle where inputs (raw data) are fed to a process
(computer systems, software, etc.) to produce output (information and insights).

Generally, organizations employ computer systems to carry out a series of operations on the data in
order to present, interpret, or obtain information. The process includes activities like data entry,
summary, calculation, storage, etc. Useful and informative output is presented in various appropriate
forms such as diagrams, reports, graphics, etc.
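To make the input-process-output idea above concrete, here is a minimal sketch in Python; the sales figures and the summary step are invented purely for illustration:

```python
# Minimal sketch of the input -> process -> output cycle described above.
# The raw sales figures and the summary step are invented for illustration.

raw_data = ["120", "85", "97", "143"]          # input: raw data as collected (text)

cleaned = [int(value) for value in raw_data]   # preparation: convert to numbers

total = sum(cleaned)                           # processing: calculation
average = total / len(cleaned)                 # processing: summary

# output: present the information in a readable form
print(f"Total sales: {total}")
print(f"Average per day: {average:.1f}")
```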

Stages of the Data Processing Cycle

1) Collection is the first stage of the cycle, and is very crucial, since the quality of data collected will
impact heavily on the output. The collection process needs to ensure that the data gathered are both
defined and accurate, so that subsequent decisions based on the findings are valid. This stage provides
both the baseline from which to measure, and a target on what to improve.

Some types of data collection include census (data collection about everything in a group or statistical
population), sample survey (collection method that includes only part of the total population), and
administrative by-product (data collection is a byproduct of an organization’s day-to-day operations).

2) Preparation is the manipulation of data into a form suitable for further analysis and processing. Raw
data cannot be processed and must be checked for accuracy. Preparation is about constructing a dataset
from one or more data sources to be used for further exploration and processing. Analyzing data that
has not been carefully screened for problems can produce highly misleading results that are heavily
dependent on the quality of data prepared.

3) Input is the task where verified data is coded or converted into machine-readable form so that it can be processed through a computer. Data entry is done through the use of a keyboard, digitizer, scanner, or data entry from an existing source. This time-consuming process requires speed and accuracy. Most data need to follow a formal and strict syntax, since a great deal of processing power is required to break down the complex data at this stage. Due to the costs, many businesses are resorting to outsourcing this stage.

4) Processing is when the data is subjected to various means and methods of manipulation. This is the point where a computer program is executed: the process contains the program code and its current activity. Depending on the operating system, the process may be made up of multiple threads of execution that execute instructions simultaneously. While a computer program is a passive collection of instructions, a process is the actual execution of those instructions. Many software programs are available for processing large volumes of data within very short periods.
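The distinction between a program and a process, and the idea of multiple threads of execution, can be illustrated with a small Python sketch; the thread names and data chunks below are invented:

```python
# Sketch of a single process running multiple threads of execution,
# as mentioned in the processing stage. The "chunk" data is invented.
import threading

def process_chunk(name, numbers):
    # Each thread manipulates its own portion of the data.
    print(f"{name} processed {len(numbers)} records, sum = {sum(numbers)}")

chunks = {"thread-1": [1, 2, 3], "thread-2": [4, 5, 6], "thread-3": [7, 8, 9]}

threads = [threading.Thread(target=process_chunk, args=(name, data))
           for name, data in chunks.items()]

for t in threads:
    t.start()
for t in threads:
    t.join()   # the process (the executing program) waits for all threads to finish
```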
 

5) Output and interpretation is the stage where processed information is transmitted to the user. Output is presented to users in various report formats such as printed reports, audio, video, or on a monitor. Output needs to be interpreted so that it can provide meaningful information that will guide the company's future decisions.

6) Storage is the last stage in the data processing cycle, where data, instructions and information are held for future use. The importance of this stage is that it allows quick access and retrieval of the processed information, allowing it to be passed directly on to the next cycle when needed. Every computer uses storage to hold system and application software.

The Data Processing Cycle is a series of steps carried out to extract information from raw data. Although each step must be taken in order, the order is cyclic. The output and storage stage can lead to a repeat of the data collection stage, resulting in another cycle of data processing. The cycle provides a view of how the data travels and transforms from collection to interpretation, and ultimately to its use in effective business decisions.

About The Author: Phillip Harris is a data management enthusiast and has written numerous blogs and articles on effective document management and data processing.

Rudo, P. (2017). 6 Important Stages in the Data Processing Cycle. [online] Enterprise Features.
Available at: http://www.enterprisefeatures.com/6-important-stages-in-the-data-processing-cycle/
[Accessed 19 Feb. 2017].

TEXT 4: Cloud Backup as a Service and DRaaS: What’s the Difference?

Cloud Backup as a Service (BaaS) means that you have turned over all or portions of your backup and recovery process to a cloud-based backup provider. It's not as simple as putting one foot in front of the other: you need to do due diligence on your provider, there are WAN speed requirements, different cloud infrastructures have their own advantages and disadvantages, and recovery will have to toe the RTO/RPO line. But all in all, cloud BaaS is simple to understand.

What is not so clear is how Disaster Recovery as a service (DRaaS) figures into this picture. While
cloud-based backup services manage data and application protection and recovery, DRaaS provides
continuous processing during the recovery process. Both services work together to ensure the fastest
possible recovery from a disaster.

What is Cloud Backup as a Service?


BaaS is cloud-based backup and recovery. This is one of the most basic connections between on-premise
and cloud. IT is responsible for setting service levels based on the level of service they contract, such as
setting backup windows and assigning RPO and RTO service levels by data priority.

DRaaS (sometimes called RaaS) is failover processing to the cloud. It builds a hot standby site in the cloud for uninterrupted production processing, which continues to run until IT repairs the on-premise environment and issues a failback order. Failover may be automated based on threshold events or manual based on alerts. The failover site can be located on the public cloud or any DRaaS vendor-owned cloud, or even on-premise in a private cloud. (If you need to keep your data behind your firewall, you can still take advantage of DRaaS if your service provider supports on-premise deployments.)
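As a rough illustration of "automated based on threshold events or manual based on alerts", here is a hedged sketch of failover decision logic in Python. It is not any vendor's API; the heartbeat counts and the three-miss threshold are assumptions made up for the example:

```python
# Illustrative sketch of threshold-based failover (not any vendor's API).
# The heartbeat readings and the 3-miss threshold are invented for the example.

MISSED_HEARTBEAT_THRESHOLD = 3

def should_fail_over(missed_heartbeats: int, manual_alert_confirmed: bool) -> bool:
    """Automate failover on a threshold event, or allow a manual decision on an alert."""
    return missed_heartbeats >= MISSED_HEARTBEAT_THRESHOLD or manual_alert_confirmed

if should_fail_over(missed_heartbeats=4, manual_alert_confirmed=False):
    print("Failing over to the cloud standby site...")
else:
    print("On-premise environment healthy; no failover.")
```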

(Note that DRaaS is not strictly confined to virtual environments, nor is it necessarily cloud-based. For
example, some vendors place failover appliances with cloud connectivity onsite, while some offer
failover services for physical and virtual servers in a managed remote site. However, the majority of
DRaaS offerings address the most common usage of managed DR services: failing over a virtualized
environment to the cloud.)

When you invest in DRaaS you will choose the level of service that you need and can afford. Your
biggest expense will be duplicating your virtual infrastructure. This does not mean that if you have a
100% virtualized environment your service provider must duplicate the entire virtual environment in the
cloud. Choose your Tier 1 applications for failover because the cost for running two active environments
is not negligible. The failover system duplicates the protected VM environment, which continuously
replicates VM images to the cloud for seamless failover.

When you choose a DRaaS service, keep these points in mind:


● Most DRaaS providers offer failover to a virtual data center in the cloud. Cloud-based disaster recovery as a service runs virtualized servers in the cloud, which are independent of a physical data center disaster.
● Local staff expertise. Your IT staff should be familiar with virtualization. Choosing
DRaaS does not absolve you from hiring virtualization admins since you must still work
with the provider. 
● Sufficient WAN performance. You will need to invest in sufficient connectivity to back
up, restore and failover as per service level agreements. Your vendor should provide WAN
acceleration features to help. 
● Due diligence. A secure cloud environment in a certified data center is a minimal start.
You will also want to understand the advantages and disadvantages of different cloud
infrastructures including the public cloud, vendor-provided clouds, and private/on-premise
clouds. 
Remember that making decisions around DRaaS will take time and money. Even with a managed service, the ultimate responsibility for your applications lies with you. You are paying for insurance that you hope you never have to use.

However, business continuity – not to mention peace of mind – is no small consideration. And should there be a data center disaster, having DRaaS in place will be invaluable for recovering applications and data before the business suffers serious harm.

Cloud Backup as a Service and DRaaS: What’s the Difference? (n.d.). Retrieved from
http://www.enterprisefeatures.com/cloud-backup-draas-difference/
 

TEXT 5: Can You Teach Creativity to a Computer?

From Picasso’s “The Young Ladies of Avignon” to Munch’s “The Scream,” what was it about some
paintings that arrested people’s attention upon viewing them, that cemented them in the canon of art
history as iconic works?

In many cases, it’s because the artist incorporated a technique, form or style that had never been used
before. They exhibited a creative and innovative flair that would go on to be mimicked by artists for
years to come.

Throughout human history, experts have often highlighted these artistic innovations, using them to judge
a painting’s relative worth. But can a painting’s level of creativity be quantified by Artificial Intelligence
(AI)?

At Rutgers’ Art and Artificial Intelligence Laboratory, my colleagues and I proposed a novel algorithm
that assessed the creativity of any given painting, while taking into account the painting’s context within
the scope of art history.

In the end, we found that, when introduced with a large collection of works, the algorithm can
successfully highlight paintings that art historians consider masterpieces of the medium.

The results show that humans are no longer the only judges of creativity. Computers can perform the
same task – and may even be more objective.

Defining Creativity

Of course, the algorithm depended on addressing a central question: how do you define – and measure
– creativity?

There is a historically long and ongoing debate about how to define creativity. We can describe a person
(a poet or a CEO), a product (a sculpture or a novel) or an idea as being “creative.”

In our work, we focused on the creativity of products. In doing so, we used the most common definition
for creativity, which emphasizes the originality of the product, along with its lasting influence.

These criteria resonate with Kant’s definition of artistic genius, which emphasizes two conditions: being
original and “exemplary.”

They're also consistent with contemporary definitions, such as Margaret A. Boden's widely accepted notions of Historical Creativity (H-Creativity) and Personal/Psychological Creativity (P-Creativity). The former assesses the novelty and utility of the work with respect to the scope of human history, while the latter evaluates the novelty of an idea with respect to its creator.
 

A graph highlighting certain paintings deemed most creative by the algorithm. Credit: Ahmed
Elgammal

Building the Algorithm

Using computer vision, we built a network of paintings from the 15th to 20th centuries. Using this web
(or network) of paintings, we were able to make inferences about the originality and influence of each
individual work.

Through a series of mathematical transformations, we showed that the problem of quantifying creativity
could be reduced to a variant of network centrality problems – a class of algorithms that are widely used
in the analysis of social interaction, epidemic analysis and web searches. For example, when you search
the web using Google, Google uses an algorithm of this type to navigate the vast network of pages to
identify the individual pages that are most relevant to your search.
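As a toy illustration of a network-centrality computation of this family (a PageRank-style power iteration), here is a short Python sketch. The paintings and "influenced" links are invented, and this is not the Rutgers creativity algorithm itself, only the general idea of scoring nodes by their position in a network:

```python
# Toy sketch of a network-centrality computation (power iteration), in the spirit
# of the algorithms mentioned above. The paintings and "influenced" links are
# invented; this is not the Rutgers creativity algorithm itself.

influences = {            # painting -> paintings it influenced
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["D"],
    "D": [],
}

nodes = list(influences)
score = {n: 1.0 / len(nodes) for n in nodes}
damping = 0.85

for _ in range(50):                          # repeated passes converge to stable scores
    new_score = {n: (1 - damping) / len(nodes) for n in nodes}
    for source, targets in influences.items():
        if targets:
            share = damping * score[source] / len(targets)
            for target in targets:
                new_score[target] += share
        else:
            # paintings that influence nothing spread their weight evenly
            for n in nodes:
                new_score[n] += damping * score[source] / len(nodes)
    score = new_score

print(sorted(score.items(), key=lambda kv: -kv[1]))
```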

Any algorithm’s output depends on its input and parameter settings. In our case, the input was what the
algorithm saw in the paintings: color, texture, use of perspective and subject matter. Our parameter
setting was the definition of creativity: originality and lasting influence.

The algorithm made its conclusions without any encoded knowledge about art or art history, and made
its assessments of paintings strictly by using visual analysis and considering their dates.

Innovation Identified
 

The Scream. Credit: Wikimedia Commons

When we ran an analysis of 1,700 paintings, there were several notable findings. For example, the
algorithm scored the creativity of Edvard Munch’s “The Scream” (1893) much higher than its late 19th-
century counterparts. This, of course, makes sense: it’s been deemed one of the most outstanding
Expressionist paintings, and is one of the most-reproduced paintings of the 20th century.

The algorithm also gave Picasso’s “Ladies of Avignon” (1907) the highest creativity score of all the
paintings it analyzed between 1904 and 1911. This is in line with the thinking of art historians, who
have indicated that the painting’s flat picture plane and its application of Primitivism made it a highly
innovative work of art – a direct precursor to Picasso’s Cubist style.

The algorithm pointed to several of Kazimir Malevich's first Suprematism paintings that appeared in 1915 (such as "Red Square") as highly creative as well. Their style was an outlier in a period then dominated by Cubism. For the period between 1916 and 1945, the majority of the top-scoring paintings were by Piet Mondrian and Georgia O'Keeffe.

Of course, the algorithm didn’t always coincide with the general consensus among art historians.

For example, the algorithm gave a much higher score to Domenico Ghirlandaio’s “Last Supper” (1476)
than to Leonardo da Vinci’s eponymous masterpiece, which appeared about 20 years later. The
algorithm favored da Vinci’s “St. John the Baptist” (1515) over his other religious paintings that it
analyzed. Interestingly, da Vinci’s “Mona Lisa” didn’t score highly by the algorithm.

Picasso’s “Ladies of Avignon.” Credit: Wally Gobetz via Flickr

Test of Time

Given the aforementioned departures from the consensus of art historians (notably, the algorithm’s
evaluation of da Vinci’s works), how do we know that the algorithm generally worked?

As a test, we conducted what we called "time machine experiments," in which we changed the date of an artwork to some point in the past or in the future, and recomputed its creativity score.

We found that paintings from the Impressionist, Post-Impressionist, Expressionist and Cubism
movements saw significant gains in their creativity scores when moved back to around AD 1600. In
contrast, Neoclassical paintings did not gain much when moved back to 1600, which is understandable,
because Neoclassicism is considered a revival of the Renaissance.

Meanwhile, paintings from Renaissance and Baroque styles experienced losses in their creativity scores
when moved forward to AD 1900.

We don’t want our research to be perceived as a potential replacement for art historians, nor do we hold
the opinion that computers are a better determinant of a work’s value than a set of human eyes.

Rather, we’re motivated by Artificial Intelligence (AI). The ultimate goal of research in AI is to make
machines that have perceptual, cognitive and intellectual abilities similar to those of humans.

We believe that judging creativity is a challenging task that combines these three abilities, and our results
are an important breakthrough: proof that a machine can perceive, visually analyze and consider
paintings much like humans can.

This article was originally published on The Conversation.

Can You Teach Creativity to a Computer? (2017). [online] The Crux. Available at: http://blogs.discovermagazine.com/crux/2015/07/30/creativity-computer/#.WKkIBFWLTIV [Accessed 19 Feb. 2017].

TEXT 6: Brain-Inspired Machines

WHAT, EXACTLY, ARE WE LOOKING FOR?


Mohamed Zahran | March 14, 2016

In the computing community, people look at the brain as the ultimate computer. Brain-inspired machines are believed to be more efficient than the traditional Von Neumann computing paradigm, which has been the dominant computing model since the dawn of computing. More recently, however, there have been many claims made regarding attempts to build brain-inspired machines. But one question, in particular, needs to be thoroughly considered before we embark on creating these so-called brain-inspired machines: Inspired by what, exactly? Do we want to build a full replica of the human brain, assuming we have the required technology?

Within the relevant research communities, the human brain and computers affect and interact with each
other in three different ways. The first model would be to build a computer that can simulate all the neurons in the brain and their interconnections. If we can create the technology, the resulting
machine would be a faithful brain simulator—a tool that can be used by neuroscientists, for example.
The second effort would be to use implants to increase the capability of the brain. As an example of this,
a team from Duke University used brain implants to allow mice to sense infrared light [1]. Although this
represents integrating a sensor with the brain, we can expect more sophisticated integration in the
future—a trend that has the potential to increase the ways computers and the brain interact together to
get the best of the two worlds. This can be thought of as part of the vast field of human–computer
interaction. Finally, the third model would be to study the brain’s characteristics, as far as our knowledge
reaches, and decide which characteristics we want to implement into our machines to build better
computers [2].

But before we dig deeper, let us agree on a set of goals one should expect to find in the ultimate
computing machine.

THE ULTIMATE COMPUTER

When the first electronic computers were built more than seven decades ago, correctness was the main
goal. As more applications were implemented, performance/speed became a necessity. Power efficiency
was added to the list with the widespread use of battery-operated devices (battery life) and the spread of
data centers and supercomputers (electricity costs). Transistors got smaller, following Moore’s law with
the enabling technology of Dennard scaling, but they became less reliable (see “Moore’s Law and
Dennard Scaling”). So reliability was also added to the list, raising the question of how to build reliable
machines from unreliable components [3]. Security joined the list because of the interconnected world
in which we live.

Moore's Law and Dennard Scaling

We now have five items in this wish list of goals: correctness, performance, power efficiency, reliability,
and security. The first two are crucial for any computer to be useful. The other three are the result of
technological constraints and functional requirements. Consequently, the ultimate computer is one that
can fulfill the correctness and performance requirements and adequately address those of power
efficiency, reliability, and security.

This raises other important questions: Can the way the brain works inspire us to envision ways to deal
with this list? Will understanding how the brain works lead us to the conclusion that we must grow out
of the current Von Neumann architecture to stand any chance of achieving these rather conflicting goals?
Can it inspire us to find ways to bring the Von Neumann architecture closer to making the items on the
wish list all possible? Or could it even push us to reconsider this wish list altogether?
 

THE VON NEUMANN MODEL

The huge majority of computers follow the Von Neumann model, where the machine fetches
instructions from the memory and executes them on data (possibly brought from memory) in the central
processing unit. The Von Neumann model has undergone numerous enhancements over decades of use,
but its core architecture remains fundamentally the same.
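The fetch-and-execute loop at the heart of the Von Neumann model can be sketched in a few lines of Python; the three-instruction "machine language" below is invented for illustration:

```python
# Toy sketch of the Von Neumann fetch-decode-execute cycle.
# The three-instruction "machine language" below is invented for illustration.

memory = [
    ("LOAD", 5),     # put 5 in the accumulator
    ("ADD", 7),      # add 7 to the accumulator
    ("PRINT", None)  # output the accumulator
]

accumulator = 0
program_counter = 0

while program_counter < len(memory):
    opcode, operand = memory[program_counter]   # fetch the instruction from memory
    if opcode == "LOAD":                        # decode and execute in the "CPU"
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "PRINT":
        print(accumulator)                      # prints 12
    program_counter += 1
```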

The brain is not a Von Neumann machine. However, the brain can still inspire us to deal with several
items of the wish list.

● The brain is an extensively parallel machine. Within the Von Neumann model, we have
already moved to multicore and manycore processors. The number of cores keeps increasing.
However, the degree by which parallelism is exploited in these parallel Von Neumann
machines depends on the application type, the expertise of the programmer, the compiler,
the operating system, and the hardware resources.
● The brain is decentralized. This is not yet the case in a Von Neumann model or in the whole
design of the computer system. Decentralization has an effect on reliability (plasticity in the
brain) and performance. Even though we have several cores working in a parallel computer
system, they are all under the control of the operating system that runs on some cores. A
computer needs to be able to detect a failure and then move the task to another part to
continue execution. This has been implemented, to some degree, with what are called thread-
migration techniques. But can we implement these on the whole computer system (storage,
memory, and so forth)?
● Current computers are precise and have a finite memory. The brain has a virtually infinite memory that is approximate. If we find a new memory technology that provides a huge amount of storage relative to current state-of-the-art memory [static random-access memory (RAM), dynamic RAM, phase-change memory, spin-transfer torque RAM, magnetoresistive RAM, and so on] but is not 100% precise, can we design software that makes use of this memory? (A minimal sketch of this last idea follows the list.)
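As a minimal sketch of the last point in the list, the following Python fragment imagines a memory whose reads come back with small random errors and shows one way software might cope (averaging repeated reads when precision matters). The error model is an assumption made up for the example:

```python
# Sketch of software coping with memory that is large but not 100% precise,
# as raised in the last bullet. The error model (random noise on reads) is invented.
import random

TRUE_VALUE = 1000.0

def imprecise_read() -> float:
    # Pretend each read of the stored value comes back with a small random error.
    return TRUE_VALUE + random.gauss(0, 5.0)

# One way software could use such memory: average several reads when
# precision matters, accept a single noisy read when it does not.
rough = imprecise_read()
careful = sum(imprecise_read() for _ in range(100)) / 100

print(f"single read:   {rough:.1f}")
print(f"averaged read: {careful:.1f}")   # much closer to the stored value
```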

The Von Neumann model puts a lot of restrictions on how much we can learn from the way the brain works. So how about exploring non-Von Neumann models?

NON-VON NEUMANN MODELS

Von Neumann computers are programmed, but brains are taught. The brain accumulates experience. If
we remove the restrictions of a Von Neumann model, can we get a more brain-like machine? We need
to keep in mind a couple of issues here.

First, we do not fully know, at least at this point, how the brain works exactly. We have many pieces of
the puzzle figured out, but many others are still missing. The second issue is that we may not need an
actual replica of the brain for computers to be useful. Computers were invented to extend our abilities
and help us do more, just like all other machines and tools invented by humanity. We don’t need a
machine with free will—or do we? The answer is debatable, and this is assuming we can build such a
machine!

But what many would agree on, or at least debate less, is that we need machines that do not require
detailed programs. We need machines that can accumulate experience. We need machines that can
continue to work in the presence of a hardware failure.

The Von Neumann model is perfect for many tasks, and, given the billions of dollars invested in software and hardware for this model, there is no practical chance of moving immediately and fully to a non-Von Neumann model. A good compromise may be to have a hybrid system, for example [4], similar to the way digital systems and analog systems are used together. For instance, a Von Neumann machine executes a task; gathers information about performance, power efficiency, and so forth; and submits that information to a non-Von Neumann machine that learns from it and reconfigures the Von Neumann machine to best execute that piece of software on its next run. This is just one scenario, but the potential in this direction is very high.
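A very rough sketch of that hybrid loop, in Python: a conventional machine runs a task, reports a measurement, and a simple learning component picks the configuration for the next run. The configurations, the measurements and the "learning" rule are all invented for illustration:

```python
# Sketch of the hybrid loop described above: a conventional machine runs a task,
# reports a measurement, and a learning component picks the next configuration.
# The configurations, measurements and "learning" rule are all invented.
import random

configs = ["low-power", "balanced", "high-performance"]
history = {c: [] for c in configs}

def run_task(config: str) -> float:
    # Stand-in for executing the task and measuring, e.g., energy per operation.
    base = {"low-power": 1.6, "balanced": 1.2, "high-performance": 1.0}[config]
    return base + random.uniform(-0.2, 0.2)

def average(samples):
    return sum(samples) / len(samples) if samples else float("inf")

for step in range(30):
    # "Learning" part: try each config once, then mostly reuse the best one so far.
    if step < len(configs):
        choice = configs[step]
    elif random.random() < 0.2:                 # occasional exploration
        choice = random.choice(configs)
    else:
        choice = min(configs, key=lambda c: average(history[c]))
    history[choice].append(run_task(choice))

best = min(configs, key=lambda c: average(history[c]))
print("Reconfigure the next execution as:", best)
```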

What Did We Learn from Artificial Neural Networks?


 
 

Once we mention computers and the brain, one of the first terms that comes to mind is the artificial
neural network (ANN). ANNs are considered a very useful tool despite being an overly simplified model
of the brain. They have been used for decades with demonstrable success in a number of areas. However,
the brain is far more sophisticated than an ANN. For instance, the neurons fire at different rates
depending on the context. Hence, there is information not only in the weights but also in the rates. This
is not implemented in traditional neural networks. There is some processing in the connections among
neurons. Can we make use of that to build better machines?
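For contrast, here is a minimal artificial neuron in Python. In a traditional ANN of this kind, everything the network "knows" lives in the weights; the inputs and weights below are invented:

```python
# Minimal sketch of an artificial neuron: in a traditional ANN, what the network
# "knows" lives entirely in the weights. The inputs and weights here are invented.
import math

def neuron(inputs, weights, bias):
    # Weighted sum followed by a sigmoid activation.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

inputs = [0.5, 0.2, 0.9]
weights = [1.5, -2.0, 0.7]     # all learned information is stored here
bias = 0.1

print(f"activation = {neuron(inputs, weights, bias):.3f}")
# Note: the firing-rate information the article mentions has no place in this
# model; only the weights (and bias) carry information.
```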

THE STORAGE CHALLENGE AND OTHER FUNDAMENTAL QUESTIONS

One of the main bottlenecks for performance in traditional computers is storage. The whole storage
hierarchy suffers from low speed (as we go down the hierarchy from the different levels of caches to
disks), high power consumption, and bandwidth problems (once we go off chip). There are several
reasons for this poor performance relative to the processor. First, the interconnection between the
processor and the storage devices is slow (the traveling speed of the signals as well as the bandwidth of
the interconnection). Second, in building those storage devices, capacity is taking the front seat relative
to speed. Can we build a new memory system based on how the brain stores information— or at least
on how we think the brain stores information? And how far do we have to deviate from the traditional
Von Neumann model to be able to achieve that? There are several hypotheses about how the brain stores
information (for example, the strength of the interconnection among neurons).

The question of consciousness is discussed in neuroscience, cognitive science, and philosophy. Now, it is time to discuss it in computer design. Consciousness is not the same as being self-aware: it is the awareness of being self-aware. Do we need computers to be conscious? What do we gain from that? Computers are already self-aware with all of their sensors, cameras, and so on. But what will we gain if we take this one step further and make them aware that they are self-aware? In my opinion, we may not gain much, assuming we can make it at all.

There’s another fundamental question: Do we need machines with emotions? We are talking about
several steps beyond affective computing [5]. My answer for this is different than my answer for the
consciousness question. Here, computers with emotions could be very useful—for example, in helping
elderly people.

A third question relates to creativity. If we can build machines that can learn, then we give them a degree of creativity. Human beings are creative in problem solving and also in defining problems. Do we want machines to be creative only in solving a problem? Or do we also want machines to be able to identify problems? What if a human and a machine have a conflict defining the same problem? Maybe in such cases we can use humans and machines as a diverse group for brainstorming and problem solving (the wisdom of the crowds).

REFERENCES

1. E. E. Thomson, R. Carra, and M. A. L. Nicolelis, “Perceiving invisible light through a somatosensory cortical prosthesis,” Nat. Commun., vol. 4, Feb. 2013.
2. S. Navlakha and Z. Bar-Joseph, “Distributed information processing in biological and
computational systems,” Commun. ACM, vol. 58, no. 1, pp. 94–102, Dec. 2014.
3. S. Borkar, “Designing reliable systems from unreliable components: The challenges of
transistor variability and degradation,” IEEE Micro, vol. 25, no. 6, pp. 10–16, Nov. 2005.
4. G. Banavar, “Watson and the era of cognitive computing,” in Proc. 20th Int. Conf.
Architectural Support for Programming Languages and Operating Systems (ASPLOS’15),
Mar. 2015.
5. [Online]. Available: http://affect.media.mit.edu/ (accessed 30 Jan. 2016).

Zahran, M. (2016). Brain-Inspired Machines. IEEE Pulse. Retrieved from http://pulse.embs.org/march-2016/brain-inspired-machines/?trendmd-shared=1

TEXT 7: New microchip demonstrates efficiency and scalable design

Increased power and slashed energy consumption for data centers

This is an annotated CAD tool layout of the Princeton Piton Processor showing 25 cores.

Credit: Princeton University

Princeton University researchers have built a new computer chip that promises to boost performance of data centers that lie at the core of online services from email to social media.

Data centers -- essentially giant warehouses packed with computer servers -- enable cloud-based
services, such as Gmail and Facebook, as well as store the staggeringly voluminous content available
via the internet. Surprisingly, the computer chips at the hearts of the biggest servers that route and
process information often differ little from the chips in smaller servers or everyday personal
computers.

By designing their chip specifically for massive computing systems, the Princeton researchers say they
can substantially increase processing speed while slashing energy needs. The chip architecture is
scalable; designs can be built that go from a dozen processing units (called cores) to several thousand.
Also, the architecture enables thousands of chips to be connected together into a single system
containing millions of cores. Called Piton, after the metal spikes driven by rock climbers into
mountainsides to aid in their ascent, it is designed to scale.

"With Piton, we really sat down and rethought computer architecture in order to build a chip
specifically for data centers and the cloud," said David Wentzlaff, an assistant professor of electrical
engineering and associated faculty in the Department of Computer Science at Princeton University.
"The chip we've made is among the largest chips ever built in academia and it shows how servers
could run far more efficiently and cheaply."

Wentzlaff's graduate student, Michael McKeown, will give a presentation about the Piton project
Tuesday, Aug. 23, at Hot Chips, a symposium on high performance chips in Cupertino, California.
The unveiling of the chip is a culmination of years of effort by Wentzlaff and his students. Mohammad
Shahrad, a graduate student in Wentzlaff's Princeton Parallel Group said that creating "a physical piece
of hardware in an academic setting is a rare and very special opportunity for computer architects."

Other Princeton researchers involved in the project since its 2013 inception are Yaosheng Fu, Tri
Nguyen, Yanqi Zhou, Jonathan Balkind, Alexey Lavrov, Matthew Matl, Xiaohua Liang, and Samuel
Payne, who is now at NVIDIA. The Princeton team designed the Piton chip, which was manufactured
for the research team by IBM. Primary funding for the project has come from the National Science
Foundation, the Defense Advanced Research Projects Agency, and the Air Force Office of Scientific
Research.

The current version of the Piton chip measures six by six millimeters. The chip has over 460 million transistors, each of which is as small as 32 nanometers -- too small to be seen by anything but an electron microscope. The bulk of these transistors are contained in 25 cores, the independent
processors that carry out the instructions in a computer program. Most personal computer chips have
four or eight cores. In general, more cores mean faster processing times, so long as software ably
exploits the hardware's available cores to run operations in parallel. Therefore, computer
manufacturers have turned to multi-core chips to squeeze further gains out of conventional approaches
to computer hardware.

In recent years companies and academic institutions have produced chips with many dozens of cores;
but Wentzlaff said the readily scalable architecture of Piton can enable thousands of cores on a single
chip with half a billion cores in the data center.

"What we have with Piton is really a prototype for future commercial server systems that could take
advantage of a tremendous number of cores to speed up processing," said Wentzlaff.

The Piton chip's design focuses on exploiting commonality among programs running simultaneously
on the same chip. One method to do this is called execution drafting. It works very much like the drafting in bicycle racing, when cyclists conserve energy behind a lead rider who cuts through the air, creating a slipstream.

At a data center, multiple users often run programs that rely on similar operations at the processor
level. The Piton chip's cores can recognize these instances and execute identical instructions
consecutively, so that they flow one after another, like a line of drafting cyclists. Doing so can increase
energy efficiency by about 20 percent compared to a standard core, the researchers said.
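A toy software sketch of the drafting idea, in Python: identical pending instructions from different programs are grouped so they run back to back. The instruction streams are invented, and the real mechanism is a hardware feature of the Piton chip, not code like this:

```python
# Toy software sketch of the "execution drafting" idea: identical instructions
# from different programs are grouped so they run back to back. The instruction
# streams are invented; the real mechanism lives in the Piton hardware, not here.
from itertools import groupby

pending = [
    ("prog-A", "LOAD"), ("prog-B", "LOAD"), ("prog-C", "LOAD"),
    ("prog-A", "ADD"),  ("prog-B", "ADD"),
    ("prog-C", "STORE"),
]

# Order by instruction type so identical operations "draft" one another.
drafted = sorted(pending, key=lambda item: item[1])

for opcode, group in groupby(drafted, key=lambda item: item[1]):
    programs = [prog for prog, _ in group]
    print(f"{opcode}: executed consecutively for {', '.join(programs)}")
```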

A second innovation incorporated into the Piton chip parcels out when competing programs access computer memory that exists off the chip. Called a memory traffic shaper, this function acts like a traffic cop at a busy intersection, considering each program's needs and adjusting memory requests and waving them through appropriately so they do not clog the system. This approach can yield an 18 percent performance jump compared to conventional allocation.
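A hedged, software-level sketch of the traffic-cop idea in Python; the programs, their relative needs and their request queues are invented, and the real memory traffic shaper is implemented in hardware:

```python
# Toy sketch of a "traffic cop" for off-chip memory requests: programs with a
# greater declared need are waved through more often. The programs, needs and
# request counts are invented; the real traffic shaper is a hardware feature.

programs = {"database": 3, "web-server": 2, "batch-job": 1}   # relative need
queues = {name: [f"{name}-req{i}" for i in range(4)] for name in programs}

schedule = []
while any(queues.values()):
    for name, need in programs.items():
        # Grant each program up to 'need' requests per round.
        for _ in range(need):
            if queues[name]:
                schedule.append(queues[name].pop(0))

print(schedule)
```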

The Piton chip also gains efficiency by its management of memory stored on the chip itself. This
memory, known as the cache memory, is the fastest in the computer and used for frequently accessed
information. In most designs, cache memory is shared across all of the chip's cores. But that strategy
can backfire when multiple cores access and modify the cache memory. Piton sidesteps this problem
by assigning areas of the cache and specific cores to dedicated applications. The researchers say the
system can increase efficiency by 29 percent when applied to a 1,024-core architecture. They estimate
that this savings would multiply as the system is deployed across millions of cores in a data center.
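A small Python sketch of the idea of dedicating cache areas and cores to specific applications rather than sharing everything; the applications and the core/cache-way mapping are invented for illustration:

```python
# Toy sketch of dedicating cache areas and cores to specific applications,
# rather than sharing everything. The applications and the mapping are invented.

assignments = {
    "video-encoder":   {"cores": [0, 1, 2, 3], "cache_ways": range(0, 8)},
    "key-value-store": {"cores": [4, 5],       "cache_ways": range(8, 12)},
    "analytics":       {"cores": [6, 7],       "cache_ways": range(12, 16)},
}

def owner_of(core: int) -> str:
    # Look up which application a core (and its cache slice) is dedicated to.
    for app, share in assignments.items():
        if core in share["cores"]:
            return app
    raise ValueError(f"core {core} is unassigned")

print(owner_of(5))   # -> key-value-store
```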

The researchers said these improvements could be implemented while keeping costs in line with
current manufacturing standards. To hasten further developments leveraging and extending the Piton
architecture, the Princeton researchers have made its design open source and thus available to the
public and fellow researchers at the OpenPiton website: http://www.openpiton.org

"We're very pleased with all that we've achieved with Piton in an academic setting, where there are far
fewer resources than at large, commercial chipmakers," said Wentzlaff. "We're also happy to give out
our design to the world as open source, which has long been commonplace for software, but is almost
never done for hardware."

Princeton University, Engineering School. (2016, August 22). New microchip demonstrates efficiency
and scalable design: Increased power and slashed energy consumption for data centers. ScienceDaily.
Retrieved February 19, 2017 from www.sciencedaily.com/releases/2016/08/160822181811.htm

TEXT 8: Web Design Vs. Web Development: What’s the Difference? By Lauren Holliday
 

What do you do?

For Patrick Haney, that question is a little more complicated.

“By day, I’m a designer and developer hybrid working on client projects through Hanerino, a two-person design studio my wife began two years ago,” he says. “We take on a wide range of work including web design and development, user experience design for apps and mobile, as well as many other design projects.”

When he is not working on contract projects, he teaches web development and graphic design classes as an adjunct instructor at the CDIA.

In high school, Haney tried “every programming language he could get his hands on,” and so, naturally,
he enrolled as a computer science major in college. It wasn’t long before he realized that he needed a
balance of web development and design and changed majors.

“My first few jobs out of college were development focused, and while I really enjoyed what I was
doing, I felt it was lacking something,” he says. “Eventually, as I began to write more HTML and CSS,
I realized that the design side of the web was really intriguing. The ability to make something work, but
then to make it enjoyable to use, that’s where I found my passion.”

A good web designer must know HTML, CSS, and JavaScript.

Patrick is one of an increasing number of people who consider themselves a designer and developer
hybrid, which makes sense in a world where clients lump web design and web development together as
if they were the same thing.

Joel Oliveira, lead developer at Change Collective, says it may be because they only have one point of reference, and that’s what they see on their screen.

“What’s on their screen is ‘designed,’ and they might not understand the mechanics and complex system
under the hood,” Oliveira says. “It’s probably simpler for those people to just pick one and go with it;
just like when you call a tissue a Kleenex or folks in the south who call any and all sodas Coke.”

These “unicorns,” or people who are great designers and developers, are fairly rare and incredibly sought
after in today’s market. The difference in the number of jobs available for designers versus developers
is drastic.

According to a Visual.ly infographic, there are 1,336,300 available jobs for web developers compared with a meager 200,870 open positions for web designers. Not only is there a huge difference in hiring demand, but the salary difference is also stark. The median salary for a web designer is $47,820 while the median salary for a web developer is $85,430.

Job demand is heavily skewed in favor of web developers.

So why does one get paid so much more than the other if they’re both working together to produce the
same outcome: a beautiful and functional website or an application?

Web designers are architects of the web. They focus on the look and feel of the website; and so, they
should be visual arts experts, who are skilled in color scheming, graphic design and information flow.
Designers are typically more in tune with their right brain hemisphere, utilizing their creativity, intuition
and imagination, to design amazing user experiences.

Development is for left-brained people; Design is better suited for the right-brain.
 

The education requirement of a web designer is debatable. While a degree may not be needed, a full
portfolio of your past work is a must. Of course others would argue that a degree from a university is
just as important. Also, you should be skilled in software such as Adobe Illustrator, Photoshop and
Dreamweaver.

The education required for web designers is less than that of web developers.

Amanda Cheung, lead interaction developer at DockYard, works with designers and other developers to create great experiences on the computer and mobile web. She creates and reviews Cascading Style Sheets (CSS), ensuring the code is clean, maintainable, user-friendly and responsive.

Cheung actually began as a designer and had a natural progression into web development because like
Haney, she felt something was missing. “I studied fine arts in school, concentrating in painting and
graphic design,” she says. “Once I graduated I started working as a graphic designer, but the job I was
doing wasn’t too fulfilling for me so I took an introduction to web design and web programming classes
at night. I was able to get several freelance gigs, realized how fun it was and made the transition into
web development full-time.”

Kyle Bradshaw is the senior front-end developer at Digitas, and he classifies himself as solely a web developer.

“I went to an engineering school so I enjoy programming. I was never very interested in the design
aspect of it. I lack artistic talent,” he says. “I could always remix other designers’ designs if it came
down to it. I enjoyed the development part more.”

Oliveira is a developer as well. He enjoys how there is always an answer in code and that computer
science and programming filled the proverbial creative void.

“There is always an answer – true or false, 1 or 0, works or crashes, but the means by which you could
get to that answer were limitless,” he says. “How liberating! The space in between fascinated me. How
efficient could I be? How fast? How cute? How confusing? How clear? The journey to finding the
answer to a problem has been the reason I’ve stayed in the field to this day.”

Most developers would agree with Joel because programmers tend to think with the left side of their
brain, which is the logical, linear thinking and technical side.

If web designers are the architects of the web, then developers are the builders. Without coders, the plans would never come to life. They work with designers, writing semantic markup like XHTML and styles in CSS, and transform static PSDs into interactive, working web pages. Typically, programmers are skilled in languages such as PHP, ASP, Ruby on Rails, Python, HTML and CSS, and more, depending on what they specialize in and their experience level. The nice thing about being a good developer is that, since their skills are in such high demand, any programmer with a good portfolio can easily get a coding job.

Web developers speak a different language (or more than one) than
front-end designers.

No matter how different (or not) being a designer and/or a developer may be, the two occupations do seem to come with the same pros and cons.

Flexible work hours and ability to work from anywhere seem to be the biggest perks aside from doing
something these people love to do.

Oliveira says the ability to work from anywhere can also be a huge con.

“Because I have my laptop with me almost all day and night I can and will work. It’s a constant struggle
to be mindful of my work/life balance. I enjoy what I do immensely (obviously a perk), but the
possibility of burnout is a very real thing,” he says.

So which profession do you choose?

Everyone I spoke to recommended not to.

Haney thinks it is important to not think of web design and development as two entirely different entities.

“Without them both, there is no usable web. Whether you’re in charge of both or working within a team
doing one or the other, make sure your process flows in both directions,” he says.

If you don’t know what you want to do and are still deciding between the two, then try working with
Photoshop one day, and Sublime the next. Like Steve Jobs said, “As with all matters of the heart, you’ll
know when you find it. And like any great relationship, it just gets better and better as the years roll on.”
So start looking. Go find it.

Holliday, L. (n.d.). Web Design Vs. Web Development: What's the Difference? SkilledUp. Retrieved from http://www.skilledup.com/articles/web-design-vs-web-development-whats-difference
 

TEXT 9: Five Ways Brands Can Improve Customer Service Via Technology

Steve Olenski, Contributor

Today's customers demand more and more from businesses. Customer demands are not the problem, though. Companies need to change and modify the way they do business, along with the way they treat customers, to achieve the level of success they desire.

In short, as customers expect more from companies, businesses and brands need to step up to the plate
to meet those challenges. The good news is technology can help brands achieve this. Here are a few
ways companies can achieve these goals.

1. Provide Mobile Access That Simplifies Their Tasks

Customers want and need information. They want an easy way to get that information and connect with
businesses. A mobile app is an exceptional and often must-have tool for businesses. Customers expect
mobile apps to provide information while also allowing the customer to interact based on their need.
JetBlue’s and JustFly’s apps are good examples of hands-on tools customers can instantly access and
use to get everything they need for traveling. It is all in one location.

2. Take Responsibility For Your Business And Its Actions

Your employee made a mistake. It happens in every company. Or, perhaps the mistake,
miscommunication, or oversight was company-wide. No matter why it happened, you need to
communicate with your audience that you are sorry it did. Social media is an excellent way to apologize
quickly. Many brands use social media as a way to solve customer complaints – especially when those
complaints come in through the platform. For example, Pizza Hut routinely answers customer
complaints nearly instantly on its social media sites.

3. Communicate A Solution

When it comes to ensuring your customer's needs are met, one of the most important things to do is to
say you're sorry. But, then, customers want to see you make it right. When Samsung's Galaxy Note
smartphones caught fire, they explained the problem on all social platforms. They apologized, and they
replaced the phones. When it comes down to it, customers just want to know they can trust a brand to
make it right.

4. Provide The Best Tech Service

When your product fails to deliver or a complication arises from its use, the first thing your customer wants is for you to solve the problem. Even if the issue is their own lack of understanding of how to use the product, they need you to provide exceptional service and just fix it. That is not always a simple solution. In many cases, the first step to fixing a problem like this is to provide over-the-phone or over-the-internet support services. Some companies are known for providing exceptional tech service like this.
 

For example, Apple is known for providing quality tech support. Dell's support services are considered
ideal. Best Buy even provides Geek Squad support as a component of a product's purchase.

5. Give Them What They Really Want That No One Else Offers

Google, as hard as they try, may not be able to return the answers to everything. At least not answers
that you can be sure are accurate. For example, math. With two kids in school, and hardly a math whiz,
I can appreciate the value of online help when it comes to this subject. Well, like most things today, there's an app for that.

Another niche student-type service that provides answers within minutes is Studypool, an online marketplace that is looking to disrupt the online tutoring industry by offering students access to over 20,000 tutors around the world 24 hours a day. The marketplace allows students to post specific homework questions that Google might not be able to answer, and get a response within minutes.

As to what the future may bring, earlier this year ZDNet highlighted five technologies via a Forrester
report that touched on technological changes that could impact customer service by the year 2021:

1. Two-way video

2. Augmented and virtual reality

3. Virtual assistants

4. Messaging

5. Connected devices

How about you?

How are you using technology when it comes to customer service?

@steveolenski is a writer who drinks too much coffee and knows a thing or two about marketing.

Olenski, S. (2016). 5 Ways Brands Can Improve Customer Service Via Technology. Forbes. Retrieved October 18, 2016, from www.forbes.com/sites/steveolenski/2016/10/18/5-ways-brands-can-improve-customer-service-via-technology/#388123921d99.
