
TECHnews

Interact with TODAY'S WORLD

April 2013 | 2nd Edition | ALEX ACM SC

LAUNCHES ITW

For the 7th year

R.I.P.

Mai Ahmed Emara

Alexandria ACM Student Chapter dedicates this special edition to her soul.

INDEX

Bioengineers Build Open Source Language for Programming Cells .... 7
Engineers Use Brain Cells to Power Smart Grid .... 10
How to Make a Computer from a Living Cell .... 14
What Happened When One Man Pinged the Whole Internet .... 16
A Smarter Algorithm May Cut Energy Use in Data Centres by 35% .... 19

INDEX EXCLUSIVE

The Amazing ITW13

ITW13 Introductory Article .... 21
Castle of the Amazing ITW13 .... 23
The Miracle Man, Ahmed Abd El-Kader, Interviewed about ITW13 .... 35
IEEE AlexSB Chairman, Amr Hassaan, Interviewed about ITW13 .... 39

INDEX

Researchers Evaluate Bose-Einstein Condensates for Communicating Among Quantum Computers .... 38
Bioengineers Build Open Source Language for Programming Cells .... 41
Deep Learning .... 43
Security Holes in Smartphone Apps .... 44
Grid Computing Will Help Creating Printable Solar Cells .... 46

INDEX

Preventing Misinformation from Spreading through Social Media .... 49
Android for Archaeology .... 51

SAN FRANCISCO - The allure of the iPhone was not its brushed metal or shiny touch screen, but the apps that turned it into anything from a flute to a flashlight. Now, Google hopes that apps will do the same thing for Glass, its Internet-connected glasses.
On Monday night, Google released extensive guidelines for software developers who want to build apps for Glass. With those guidelines, it is taking a page from Apple's playbook by being much more restrictive about the glasses than it has been with other products, particularly its Android operating system for phones, and controlling the type of apps that developers build. Analysts said that was largely because Google wanted to introduce the technology to the public slowly, to deal with concerns like privacy.

"Developers are crucial to the future of Glass, and we are committed to building a thriving software ecosystem for them and for Glass users," Jay Nancarrow, a Google spokesman, said in a statement.

To begin, developers cannot sell ads in apps, collect user data for ads, share data with ad companies or distribute apps elsewhere. They cannot charge people to buy apps or virtual goods or services within them. Many developers said they expected Google to eventually allow them to sell apps and ads. But Sarah Rotman Epps, an analyst at Forrester who studies wearable computing, said Google was smart to limit advertising at first. "What we find is the more intimate the device, the more intrusive consumers perceive advertising is," she said. Still, she said many consumers had said they would like to interact with brands on Glass in certain ways, like a bank showing a balance while a user is shopping or a hospital sending test results.

On Tuesday, Google sold its first glasses for $1,500 to developers who had signed up last year. Some developers said they were disappointed by the limits. "It gives them a lot of control over the experience," said Frank Carey, a software developer and computer science graduate student in New Paltz, N.Y. "My hope is they make it as open as possible so that we can really test the limits of what this type of device would look like."
Mr. Carey built an app at a Google hackathon for taking photos of people you meet at cocktail parties and tagging them with their names and details, to discreetly pull up the information when you see them again. Other developers said it made sense for Google to be more cautious than it was with mobile phones, because Glass is always in a user's field of vision. "You don't carry your laptop in the bathroom, but with Glass, you're wearing it," said Chad Sahlhoff, a freelance software developer in San Francisco. "That's a funny issue we haven't dealt with as software developers." Mr. Sahlhoff said he wanted to build apps for carpenters so they could see schematics without lifting their eyes from machines, and for drivers to see the speed limit and points of interest without taking their eyes off the road.

Just as the iPhone ushered in a new wave of computing on mobile phones, Glass could be the beginning of wearable computing becoming mainstream. But the question is whether people are ready to wear computers on their bodies, and to interact with others wearing them. "Glass could be the next great platform for app development, like the iPhone," Ms. Epps said. "But the variable is whether consumers will want it or not, and that is a real unknown." So far, wearable computing has been confined mostly to industries like health care and the military and to fitness devices like the Nike FuelBand. But as companies like Apple, Samsung and Google build wearable devices, the number shipped in 2016 could grow to 92.5 million, up from 14.3 million in 2011, according to IHS, a business research firm. Google is slowly selling its first devices to people who have signed up in advance to buy them. The company has said it hopes to sell a less expensive and more polished version to consumers by the end of the year.

Glass wearers, using their voices, fingers or head movements, can search the Web, take pictures and view walking directions, for instance. The screen is directly in front of the wearer's eye but, in the wearer's perception, appears to be a 25-inch high-definition screen eight feet away. The battery generally lasts a day, according to Google. Developers and tech investors have clamored to get their hands on Glass. About 200 developers attended Google-sponsored hackathons to build apps. Three prominent venture capital firms started a partnership to seek start-up pitches from Glass developers. In addition to restricting advertising in apps, Google is also limiting the amount of access app software has to the devices.

The apps, which will be called Glassware, will be cloud-based, like Web apps, as opposed to living on the device like cellphone apps. Developers will not be able to change the display or access the sensors on the device. Jake Weisz, who works in I.T. in Chicago, is building tools to rapidly receive and respond to online updates, and said it would be less distracting to see them on Glass. "My current situation is that my phone buzzes, I check the notification, and often I barely get the phone put back away before it buzzes again," Mr. Weisz said. With Glass, he said, "I can glance upward without stopping what I'm doing." So far, the only people who have worn the glasses for extended periods are Google employees and software developers, people who are comfortable with cutting-edge technology.
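Because Glassware lives in the cloud rather than on the device, an app is essentially a server that hands structured cards to Google's service for delivery to the wearer's timeline. The sketch below only illustrates that model; the field names and the card structure are assumptions for the example, not the actual Glass API schema.

```python
import json

def make_timeline_card(text, speakable=None):
    """Build an illustrative notification card for a cloud-based
    Glassware service. The field names here are assumptions for
    this sketch, not a documented schema."""
    card = {"text": text}
    if speakable:
        # Hypothetical field: a version of the card Glass could read aloud.
        card["speakableText"] = speakable
    return json.dumps(card)

# A server-side app would POST JSON like this to the cloud service,
# which then syncs the card to the wearer's timeline; nothing runs
# on the device itself.
payload = make_timeline_card("Meeting in 10 minutes",
                             speakable="Your meeting starts in ten minutes")
print(payload)
```

The point of the design is visible even in this toy: the developer never touches the display or the sensors directly, only the card payload.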

But Google is reminding developers to keep a mainstream audience in mind. It also advised them to make sure apps do not send updates too frequently, and to avoid doing anything consumers do not expect. "Be honest about the intention of your application, what you will do on the user's behalf, and get their explicit permission before you do it," Google said.

CLAIRE CAIN MILLER


http://bits.blogs.nytimes.com/2013/04/16/google-releases-details-about-glass-for-app-developers/

ENGINEERS USE BRAIN CELLS TO POWER SMART GRID

The unmatched ability of the human brain to process and make sense of large amounts of complex data has caught the attention of engineers working in the field of control systems. "The brain is one of the most robust computational platforms that exists," says Ganesh Kumar Venayagamoorthy, Ph.D., director of the Real-Time Power and Intelligent Systems Laboratory at Clemson University. "As power-systems control becomes more and more complex, it makes sense to look to the brain as a model for how to deal with all of the complexity and the uncertainty that exists."

Led by Venayagamoorthy, a team of neuroscientists and engineers is using neurons grown in a dish to control simulated power grids. The researchers hope that studying how neural networks integrate and respond to complex information will inspire new methods for managing the country's ever-changing power supply and demand. In other words, the brainpower behind our future electric power grid might not be what you think.

Power to the people


America's strategy for providing power began in the late 1800s as a number of isolated generating plants serving regional customers.

Over the next 50 years, the electric system was rapidly transformed into an interconnected "grid" that ensured access to power when equipment failed or during periods of unexpected demand. Today, with nearly 200,000 miles of high-voltage lines connecting over 6,000 power plants, America's power grid has been called the world's largest single machine.

Unfortunately, the grid's aging infrastructure wasn't built to handle today's ever-increasing demand. According to the U.S. Department of Energy, the average power generating station in the US was built in the 1960s, using even older technology. Today, the average substation transformer is 42 years old, two years past its expected life span.

Another problem is that while the system has a great capacity to produce power, it doesn't actually have a way to store power. This can spell trouble during periods of unexpected high demand, which can result in a massive loss (blackout) or reduction (brownout) in electricity. In 2003, 50 million people in eight states and one Canadian province were left without power when a single transmission line in Ohio was damaged by a tree limb.

Tomorrow's power grid will need to be able to anticipate usage and quickly compensate for unexpected need. The "on-demand" power production strategy of our current system also makes it difficult to incorporate renewable sources of energy, such as wind and solar power, which can't be cranked up or down in response to peaks and lulls in power consumption. "In order to get the most out of the different types of renewable energy sources, we need an intelligent grid that can perform real-time dispatch and manage optimally available energy storage systems," says Venayagamoorthy.

A Smarter Electric Power Grid


While technologies such as solar panels, wind turbines and hybrid electric vehicles will help reduce our non-renewable energy consumption, experts believe the development of a "smart" grid, capable of monitoring and controlling the flow of electricity from power plants down to individual appliances, will have the largest impact. According to the Department of Energy, if the current grid were just 5 percent more efficient, the energy savings would be equal to removing 53 million cars from the planet. While a number of strategies have been proposed to optimize grid performance and incorporate intermittent energy sources, the ultimate goal is to create a distributed energy delivery network characterized by a two-way flow of electricity and information. For Venayagamoorthy, looking to the brain for inspiration was a no-brainer. "What we need is a system that can monitor, forecast, plan, learn, make decisions," says Venayagamoorthy. "Ultimately, what we need is a control system that is very brain-like."

What Would The Brain Do?


Because the brain operates in a completely different way than traditional computing systems, the first step was to try to make sense of how the brain integrates and responds to data. To do so, Venayagamoorthy enlisted the expertise of neuroscientist Steve Potter, Ph.D., director of the Laboratory for NeuroEngineering at the Georgia Institute of Technology. A leader in the field of learning and memory research, Potter recently pioneered a new method for understanding how the brain integrates and responds to information at the network level. The technique involves growing neurons in a dish containing a grid of electrodes that can both stimulate and record activity. The electrodes connect the neuronal network to a computer, allowing two-way communication between the living and the electronic components. Potter's group has had success with this approach in the past, having shown that living neuronal networks can be made to control computer-simulated animals and simple robots. In the current project, the network is trained to recognize and respond to voltage and speed signals from Venayagamoorthy's power grid simulation.

"The goal is to translate the physical and functional changes that occur as a living neuronal network learns into mathematical equations, ultimately leading to a more brain-like intelligent control system," says Venayagamoorthy. The purpose is to develop brain-inspired computer code, meaning living brain cells won't be part of the final equation.

What have we learnt so far?


The collaboration has already yielded encouraging results. The investigators have successfully "taught" a living neuronal network how to respond to complex data, and have incorporated these findings into simulated versions called bio-inspired artificial neural networks (BIANNS). They are currently using the new and improved BIANNS to control synchronous generators connected to a power system. Venayagamoorthy and his team hope that this work will pave the way for smarter control of our future power grid. This project was supported by NSF's Office of Emerging Frontiers in Research and Innovation (EFRI), which strives to keep the nation at the forefront of engineering research by investing in new and transformative projects.
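The core idea behind a BIANN-style controller, stripped of all the biology, is that a network learns a control law from examples instead of having it hand-coded. The toy below is not the team's BIANN; it is a single artificial neuron trained with the delta rule to map a simulated grid measurement (a frequency deviation) to a corrective control signal, with the target law (push back proportionally against the deviation) chosen purely for illustration.

```python
import random

random.seed(0)
w, b, lr = 0.0, 0.0, 0.1  # neuron weight, bias, learning rate

def desired_control(deviation):
    # Hypothetical target control law for the sketch:
    # counteract the measured deviation proportionally.
    return -2.0 * deviation

for _ in range(2000):
    dev = random.uniform(-0.5, 0.5)   # simulated grid measurement
    out = w * dev + b                 # neuron's control output
    err = desired_control(dev) - out  # how far from the desired response
    w += lr * err * dev               # delta-rule weight update
    b += lr * err

print(round(w, 2), round(b, 2))  # learned parameters approximate the law
```

After training, the neuron has effectively "discovered" the control law from data alone, which is the learning-based control idea the project scales up with far richer, brain-inspired networks.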

G. Kumar Venayagamoorthy Ph.D., Clemson University


http://www.nsf.gov/discoveries/disc_summ.jsp?cntn_id=127605&org=NSF

HOW TO MAKE A COMPUTER FROM A LIVING CELL

Genetic logic gates will enable biologists to program cells for chemical production and disease detection.
If biologists could put computational controls inside living cells, they could program them to sense and report on the presence of cancer, create drugs on site as they're needed, or dynamically adjust their activities in fermentation tanks used to make drugs and other chemicals. Now researchers at Stanford University have developed a way to make genetic parts that can perform the logic calculations that might someday control such activities.

The Stanford researchers' genetic logic gate can be used to perform the full complement of digital logic tasks, and it can store information, too. It works by making changes to the cell's genome, creating a kind of transcript of the cell's activities that can be read out later with a DNA sequencer. The researchers call their invention a "transcriptor" for its resemblance to the transistor in electronics. "We want to make tools to put computers inside any living cell: a little bit of data storage, a way to communicate, and logic," says Drew Endy, the bioengineering professor at Stanford who led the work.

Timothy Lu, who leads the Synthetic Biology Group at MIT, is working on similar cellular logic tools. "You can't deliver a silicon chip into cells inside the body, so you have to build circuits out of DNA and proteins," Lu says. The goal is not to replace computers, but to open up biological applications that conventional computing simply cannot address. Biologists can give cells new functions through traditional genetic engineering, but Endy, Lu, and others working in the field of synthetic biology want to make modular parts that can be combined to build complex systems from the ground up. The cellular logic gates, Endy hopes, will be one key tool to enable this kind of engineering.

Cells genetically programmed with a biological AND gate might, for instance, be used to detect and treat cancer, says Endy. If protein A and protein B are present (where those proteins are characteristic of, say, breast cancer), then this could trigger the cell to produce protein C, a drug. In the cancer example, says Endy, you'd want the cell to respond to low levels of cancer markers (the signal) by producing a large amount of the drug. The case is the same for biological cells designed to detect pollutants in the water supply: ideally, they'd generate a very large signal (for example, quantities of bright fluorescent proteins) when they detect a small amount of a pollutant.

The transcriptor triggers the production of enzymes that cause alterations in the cell's genome. When the production of those enzymes is triggered by the signal (a protein of interest, for example), these enzymes will delete or invert a particular stretch of DNA in the genome. Researchers can code the transcriptor to respond to one or multiple such signals. The signal can be amplified because one change in the cell's DNA can lead the cell to produce a large amount of the output protein over time. Depending on how the transcriptor is designed, it can act as a different kind of logic gate: an AND gate that turns on only in the presence of two proteins, an OR gate that's turned on by one signal or another, and so on. Endy says these gates could be combined into more complex circuits by making the output of one the input for the next. This work is described today in the journal Science.
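The behaviour described above (two inputs needed to flip a stretch of DNA, the flip persisting as stored state, and one flip driving sustained protein output) maps neatly onto a few lines of code. The class below is a purely illustrative software model of a transcriptor-style AND gate, not the biochemistry; the protein-level numbers are arbitrary.

```python
class TranscriptorANDGate:
    """Toy model of a transcriptor-based AND gate with memory."""

    def __init__(self):
        self.inverted = False  # orientation of the controlled DNA segment

    def expose(self, protein_a, protein_b):
        # Both enzyme-triggering input proteins must be present
        # for the DNA segment to be inverted.
        if protein_a and protein_b:
            self.inverted = True  # the flip persists: one stored bit

    def output_protein_level(self, hours=1):
        # A single DNA change drives sustained expression over time,
        # which is the signal amplification described in the article.
        return 100 * hours if self.inverted else 0

gate = TranscriptorANDGate()
gate.expose(protein_a=True, protein_b=False)
print(gate.output_protein_level())         # one input alone: still off
gate.expose(protein_a=True, protein_b=True)
print(gate.output_protein_level(hours=5))  # large, persistent output
```

Note that once both inputs have been seen, the output stays on even if the inputs disappear; that is the memory property that faster, protein-only logic systems lack.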

MIT's Lu says cellular circuits like his and Endy's, which use enzymes to alter DNA, are admittedly slow: from input to output, it can take a few hours for a cell to respond and change its activity. Other researchers have made faster cellular logic systems that use other kinds of biomolecules (regulatory proteins or RNA, for example), but Lu says these faster systems lack signal amplification and memory. Future cellular circuits are likely to use some combination of different types of gates, Lu says. Christopher Voigt, a biological engineer at MIT, says the next step is to combine genetic logic gates to make integrated circuits capable of more complex functions. "We want to make cells that can do real computation," he says.

Katherine Bourzac


http://www.technologyreview.com/news/512901/how-to-make-a-computer-from-a-living-cell/

WHAT HAPPENED WHEN ONE MAN PINGED THE WHOLE INTERNET

A home science experiment that probed billions of Internet devices reveals that thousands of industrial and business systems offer remote access to anyone.
You probably haven't heard of HD Moore, but up to a few weeks ago every Internet device in the world, perhaps including some in your own home, was contacted roughly three times a day by a stack of computers that sit overheating in his spare room. "I have a lot of cooling equipment to make sure my house doesn't catch on fire," says Moore, who leads research at computer security company Rapid7. In February last year he decided to carry out a personal census of every device on the Internet as a hobby. "This is not my day job; it's what I do for fun," he says.

Moore has now put that fun on hold. "[It] drew quite a lot of complaints, hate mail, and calls from law enforcement," he says. But the data collected has revealed some serious security problems, and exposed some vulnerable business and industrial systems of a kind used to control everything from traffic lights to power infrastructure.

Moore's census involved regularly sending simple, automated messages to each one of the 3.7 billion IP addresses assigned to devices connected to the Internet around the world (Google, in contrast, collects information offered publicly by websites). Many of the two terabytes (2,000 gigabytes) worth of replies Moore received from 310 million IPs indicated that they came from devices vulnerable to well-known flaws, or configured in a way that could let anyone take control of them.

On Tuesday, Moore published results on a particularly troubling segment of those vulnerable devices: ones that appear to be used for business and industrial systems. Over 114,000 of those control connections were logged as being on the Internet with known security flaws. Many could be accessed using default passwords, and 13,000 offered direct access through a command prompt without a password at all.
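A census like this is, at its core, a loop over the IPv4 address space that must skip the ranges no one should ever probe. The sketch below shows only that bookkeeping, using Python's standard `ipaddress` module; the actual probing (sending a packet per address and recording replies) is deliberately omitted, and the skip list is a simplified assumption rather than a complete list of reserved ranges.

```python
import ipaddress

# Ranges a responsible scan would never touch: private, loopback,
# multicast, reserved, and the "this network" block (simplified list).
SKIP = [
    ipaddress.ip_network(n)
    for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16",
              "127.0.0.0/8", "224.0.0.0/4", "240.0.0.0/4", "0.0.0.0/8")
]

def probeable(block):
    """Return True if a /24 block is fair game for the census."""
    return not any(block.subnet_of(net) for net in SKIP)

# Public address space is probeable; private space is not.
print(probeable(ipaddress.ip_network("8.8.8.0/24")))
print(probeable(ipaddress.ip_network("10.1.2.0/24")))
```

Even restricted to /24 blocks, the public IPv4 space is roughly 14 million blocks, which gives a sense of why Moore's census needed racks of machines and produced terabytes of replies.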

Those vulnerable accounts offer attackers significant opportunities, says Moore, including rebooting company servers and IT systems, accessing medical device logs and customer data, and even gaining access to industrial control systems at factories or power infrastructure. Moore's latest findings were aided by a similar dataset published by an anonymous hacker last month, gathered by compromising 420,000 pieces of network hardware.

The connections Moore was looking for are known as serial servers, used to connect devices to the Internet that don't have that functionality built in. "Serial servers act as glue between archaic systems and the networked world," says Moore. "[They] are exposing many organizations to attack." Moore doesn't know whether the flaws he has discovered are being exploited yet, but has released details on how companies can scan their systems for the problems he uncovered.

Joel Young, chief technology officer of Digi International, manufacturer of many of the unsecured serial servers that Moore found, welcomed the research, saying it had helped his company understand how people were using its products. "Some customers that buy and deploy our products didn't follow good security policy or practices," says Young. "We have to do more proactive education for customers about security." Young says his company sells a cloud service that can give its products a private, secured connection away from the public Internet. However, he also said that Digi would continue to ship products with default passwords, because it makes initial setup smoother, and that makes customers more likely to set their own passwords. "I haven't found a better way," he says.

Billy Rios, a security researcher who works on industrial control systems at security startup company Cylance, says Moore's project provides valuable numbers to quantify the scale of a problem that is well known to experts like himself but underappreciated by companies at risk.

Rios says that in his experience, systems used by more critical facilities, such as energy infrastructure, are just as likely to be vulnerable to attack as those used for jobs such as controlling doors in a small office. "They are using the same systems," he says. Removing serial servers from the public Internet so that they are accessed through a private connection could prevent many of the easiest attacks, says Rios, but attackers could still use various techniques to steal the necessary credentials.

The new work adds to other significant findings from Moore's unusual hobby. Results he published in January showed that around 50 million printers, games consoles, routers, and networked storage drives are connected to the Internet and easily compromised due to known flaws in a protocol called Universal Plug and Play (UPnP). This protocol allows computers to automatically find printers, but it is also built into some security devices, broadband routers, and data storage systems, and could be putting valuable data at risk.

Data collected by Moore's survey has also helped Rapid7 colleagues identify how a piece of software called FinFisher was used by law enforcement and intelligence agencies to spy on political activists. It also helped unmask the control structure for a long-running campaign called Red October that infiltrated many government systems in Europe.

Moore believes the security industry is overlooking some rather serious, and basic, security problems by focusing mostly on the computers used by company employees. "It became obvious to me that we've got some much bigger issues," says Moore. "There [are] some fundamental problems with how we use the Internet today." He wants to get more people working to patch up the backdoors that are putting companies at risk. However, Moore has no plans to probe the entire Internet again. Large power and Internet bills, and incidents such as the Chinese government's Computer Emergency Response Team asking U.S. authorities to stop Moore "hacking all their things", have convinced him it's time to find a new hobby.

However, with plenty of data left to analyze, there will be more to reveal about the true state of online security, says Moore: "We're sitting on mountains of new vulnerabilities."

Tom Simonite
http://www.technologyreview.com/news/514066/what-happened-when-one-man-pinged-the-whole-internet/

A SMARTER ALGORITHM MAY CUT ENERGY USE IN DATA CENTRES BY 35%

New research suggests that data centres could significantly cut their electricity usage simply by storing fewer copies of files, especially videos. For now the work is theoretical, but over the next year, researchers at Alcatel-Lucent's Bell Labs and MIT plan to test the idea, with an eye to eventually commercializing the technology. It could be implemented as software within existing facilities. "This approach is a very promising way to improve the efficiency of data centres," says Emina Soljanin, a researcher at Bell Labs who participated in the work. "It is not a panacea, but it is significant, and there is no particular reason that it couldn't be commercialized fairly quickly."

With the new technology, any individual data centre could be expected to save 35 percent in capacity and electricity costs, about $2.8 million a year or $18 million over the lifetime of the centre, says Muriel Médard, a professor at MIT's Research Laboratory of Electronics, who led the work and recently conducted the cost analysis.

So-called storage area networks within data centre servers rely on a tremendous amount of redundancy to make sure that downloading videos and other content is a smooth, unbroken experience for consumers. Portions of a given video are stored on different disk drives in a data centre, with each sequential piece cued up and buffered on your computer shortly before it's needed. In addition, copies of each portion are stored on different drives, to provide a backup in case any single drive is jammed up.

A single data centre often serves millions of video requests at the same time. The new technology, called network coding, cuts way back on the redundancy without sacrificing the smooth experience. Algorithms transform the data that makes up a video into a series of mathematical functions that can, if needed, be solved not just for that piece of the video, but also for different parts. This provides a form of backup that doesn't rely on keeping complete copies of the data. Software at the data centre could simply encode the data as it is stored and decode it as consumers request it.

Médard's group previously proposed a similar technique for boosting wireless bandwidth (see "A Bandwidth Breakthrough"). That technology deals with a different problem: wireless networks waste a lot of bandwidth on back-and-forth traffic to recover dropped portions of a signal, called packets. If mathematical functions describing those packets are sent in place of the packets themselves, it becomes unnecessary to re-send a dropped packet; a mobile device can solve for the missing packet with minimal processing. That technology, which improves capacity up to tenfold, is currently being licensed to wireless carriers, she says.

Between the electricity needed to power computers and the air conditioning required to cool them, data centres worldwide consume so much energy that by 2020 they will cause more greenhouse-gas emissions than global air travel, according to the consulting firm McKinsey. Smarter software to manage them has already proved to be a huge boon (see "A New Net"). Many companies are building data centres that use renewable energy and smarter energy management systems (see "The Little Secrets Behind Apple's Green Data Centres"). And there are a number of ways to make chips and software operate more efficiently (see "Rethinking Energy Use in Data Centres"). But network coding could make a big contribution by cutting down on the extra disk drives (each needing energy and cooling) that cloud storage providers now rely on to ensure reliability.

This is not the first time that network coding has been proposed for data centres. But past work was geared toward recovering lost data. In this case, Médard says, "we have considered the use of coding to improve performance under normal operating conditions, with enhanced reliability a natural byproduct."
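The storage-saving idea can be seen in miniature with the simplest possible code: instead of keeping a full second copy of every chunk of a video, store the chunks plus one XOR "function" of them, from which any single lost chunk can be re-solved. Real network coding uses random linear combinations over finite fields, so the sketch below (plain XOR parity, the simplest special case) is only an illustration of the principle, not the researchers' algorithm.

```python
from functools import reduce

def xor_parity(chunks):
    """Compute the byte-wise XOR of equal-length chunks: one stored
    'function' of the data instead of a full replica of each chunk."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

def recover(surviving_chunks, parity):
    # XOR-ing the parity with every surviving chunk cancels them out,
    # leaving exactly the one chunk that was lost.
    return xor_parity(surviving_chunks + [parity])

video_chunks = [b"frame-01", b"frame-02", b"frame-03"]
parity = xor_parity(video_chunks)  # 1 extra chunk vs. 3 full copies

lost = video_chunks.pop(1)         # a drive holding one chunk fails
rebuilt = recover(video_chunks, parity)
print(rebuilt == lost)             # the lost chunk is re-solved
```

Full replication of three chunks costs three extra chunks of storage; the parity scheme costs one, which is where the claimed capacity and electricity savings come from.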

David Talbot
http://www.technologyreview.com/news/513656/a-smarter-algorithm-could-cut-energy-use-in-data-centers-by-35-percent/


Exclusive Coverage

THE AMAZING ITW13


Have you ever heard of IEEE? The answer to this question goes back to 1884, when the AIEE (American Institute of Electrical Engineers) was founded. Decades later it merged with the IRE (Institute of Radio Engineers) to become IEEE, the Institute of Electrical and Electronics Engineers, which has since been considered one of the greatest non-profit organizations dedicated to serving engineers in most engineering fields. Over its history, the organization has moved from its original full name to simply IEEE: Engineering for all.

To serve engineers all over the world, IEEE has many student branches around the globe that help engineers enhance their technical and non-technical skills through various activities. IEEE Alexandria Student Branch (IEEE AlexSB) is considered one of the oldest student branches in Egypt and the Middle East, and through more than ten years of success IEEE AlexSB has become the biggest student branch in Region 8, which includes Africa, the Middle East and Europe, besides being the second largest student branch worldwide.

Five years after the branch's foundation, the executive board of the time had an idea to help students and engineers learn more about our rapidly evolving technological world: a technical conference that would gather many technologies in a series of technical sessions. After brainstorming names, they came up with "Interact with Today's World", pronounced ITW. ITW05 was a pre-event for the Einstein Symposium, with about 450 attendees. Alexscope was a one-day event that included several seminars on various topics; it was organized in 2006 and 2007 and was terminated due to conflict with ITW.
After two years of organizing Alexscope, ITW returned to its main role of empowering engineers to create their future. ITW08 came back in a new style, with a series of technical and non-technical sessions under the slogan "Open your eyes on the new world", and it earned the branch the Darrel Chong Silver Award. ITW09 was special for its panel discussions on research and studying abroad, as well as its green-technology theme; 2009 was the second year in a row the branch received the Darrel Chong Silver Award. With more than 400 attendees filling the Great Hall at the Bibliotheca Alexandrina, ITW10 aimed to link the academic and professional fields, with speakers from both academic and professional organizations; attendees left the conference looking forward to ITW11. Celebrating ten years since the branch's foundation, IEEE AlexSB's volunteers and executive board decided to make ITW11 unique in its vision and in its contribution to advancing our community; that episode carried the slogan "When the Sphinx Talks in Binary", and well-known speakers were invited, including Mr. Wael El Fakharany, regional manager of Google; Mr. Ali El Faramawy, vice president of Microsoft International; and Eng. Samer El Sahn, founder of Tahrir. With more than 750 attendees, ITW12 was special for being a SPAC event, hosting a video conference with a professor from MIT, and welcoming three international speakers as well as international attendees.

This year, IEEE AlexSB intends to continue its road to success by creating new ways to contribute to science. To be continued... see you at ITW13!
An IEEE AlexSB Volunteer

THE CASTLE OF ITW13
For the first time in the history of both ITW and the Hilton Alexandria Green Plaza, the hotel will host IEEE AlexSB's globally reputed conference, Interact with Today's World. The Hilton has hosted conferences before, most recently a presidential one, but ITW13 is expected to give life back to this place and make it one of the venues that witnessed the glorious ITW. It is a kind of rebirth for this place in the records of technology and science, a rebirth given by the seventh Interact with Today's World.


What are the expected added values from this episode?
Two things, if I may. First, a solid, lasting memory dug deep in the mind, rather than just another conference in a lecture hall. Second, different content that will stay with the attendees, since they will be introduced to things they are hearing for the first time, things they will keep in mind for years to come; we put their minds on the enlightening route of searching and researching ever more.

What challenges are you expecting to face during the organization?
The usual ones: the timing, which always suits some speakers and not others, and the same goes for the attendees. The different starting dates of some universities can also be challenging, since some of our attendees are international students. And the technical content might be more advanced than necessary, which we fear could be hard to comprehend.

We noticed a certain theme in the fliers, posters and videos. What does this theme represent?
It represents a surprise that we hope will be fully revealed when its time comes.
Q: And what is the story behind it?
A: Ummmm... a surprise! For now.

This event has gathered a lot of fans from year to year. What was the impact of the ITW13 publicity on them?
Look, I know you're trying to hit on something. The announcement was a bit later than the habitual launch, yes, but it worked in our favor: we found, astonishingly, a wide range of people actually queued up waiting for it. This was obvious everywhere; people were extremely active on the event page, which reached approximately 15,800, sharing their wishes to attend and enjoy it over and over again.

What were the main phases during the preparations for ITW13?
First, brainstorming: we covered the usual tracks and focused more and more on the new tracks we are trying to invade. The usual six main tracks are Communications, Electronics, Computer Science, Electrical Power, Mechanical Engineering and Nuclear, and a lot of others are waiting in line to be astounded. Second, preparations and the launch of the early campaign, which began with "Sooner than Expected" and lasted three weeks.
Q: Ah yes, "Sooner than Expected"; what does it point to?
A: If you look at the date of the actual event, it falls in the first half of September; all the previous editions since 2011 were in the second half.
Third, putting the puzzle together.
Q: What puzzle?
A: Selecting and dropping topics, international speakers, and everything else you would just stare at with unblinking eyes, especially the VIPs.

Who are the VIPs expected to witness the event this year?
The worthiest and most valuable figures, people you and I and everyone else dreamed not just of being like, but of simply seeing on our screens.
Q: Names, sir?
A: Let's speak about values; then you may build your own story about the names. You won't get everything in one interview, will you?


Who are the expected speakers this year?
Again, all I can promise you is valuable men with the mic, on the chairs and behind the screens. This year is the year of values and perfection.

What is the impact of changing the hosting place of the event on ITW fans?
People are asking, "Why not the Great Hall?!" Everyone should think of it as a positive change. The Great Hall was the usual venue, yes, but we want to give ITW13 some fresh air so everyone can breathe some things different (uppercase the S in some thingS). ITW is ITW everywhere; the Great Hall is a friend of ITW that we will go on visiting over and over again.

Which strong points of past years do you intend to keep this year?
Let's begin with last year: we took the first international steps, with worldwide speakers and attendees, and ITW12 was the first edition to be a SPAC event, which is a point to preserve. Video conferences are a powerful way to overcome the obstacles of place when it comes to international flavors, and this year we will also triumph over the challenges of time.

ITW11 was special for its many public figures, while ITW12 was special for being a SPAC event with international speakers. What will be special about this episode?
Think of an equation that integrates awareness + uniqueness + interaction + loyalty, in addition to the satisfaction of the attendees on all levels:

their staying for eight hours daily, their expectation of a huge knowledge festival, their aspiration to a memorable event. I have come bearing gifts, and that's your second one.

How will you guarantee interaction between the attendees and the speakers during the event?
The topics achieve that purpose in the first place; then the parallel sessions will ensure that everyone goes to their area of expertise. That is how we keep them focused and in the mood that encourages them to open their minds, and the speakers' minds too. Then, and only then, is the interaction assured.

How will you balance changing the hosting place with keeping the same privileges?
The place is not a judging factor; quality is a matter of organization, not of the hosting venue. And for your information, we are being hosted in a much more luxurious hall. We are not on the same level this year; we are way ahead.

How will ITW13 and IEEE AlexSB add value to each other?
IEEE is the source of every single point of success for ITW. Anything that ITW, the eldest son of the branch, gains pours back into IEEE AlexSB. ITW is IEEE AlexSB's exemplary event when you think about perfection of organization.

Everyone who does a great deed fears something. What do you fear?
Only one thing: that people walk into ITW13 and walk out with nothing different in their lives. We want to leave a mark, which is going to happen anyway; what I fear is only how deep and how long this mark will dwell in everyone's heart.

How are you serving fans distributed among the different departments of our faculty in particular, and the college in general?
The topics do not serve only people with a certain background; they seek only a passion for knowledge. We hope to cover everything in technology and science today so intensively that when you turn on the TV, or turn to Google searching for something, you find ITW13 in the first search results.

How do you imagine the closing ceremony this year? How do you feel about it?
There is a closing team, and I can thoroughly expect something big from them: something great, a beyond-imagination closure, something you may consider a third surprise.
Q: Surprises again?!
A: We are full of surprises, aren't we?
R: Sure you are.

What is your measure of success for this event?
A clear added value for anyone who even looked inside the hall, saw the teamwork, or gained practical experience. My measure of success for ITW13 is feeling that it has visited every heart in the hall, and will remain there.

Ahmed Abd El-Kader

What impact has the launch had on the volunteers and the long-awaiting audience?
We as volunteers always wait for ITW, and so does our audience. The spirit shared between us and the attendees over the past episodes of ITW has been accumulating its powerful effect from year to year. The great feeling we get from those who come to register at the desk is something no one can ever ignore.

There was a lot of talk and rumour about changing the hosting place for ITW. What do you say about that?
Most of the feedback shows interest in trying something new. It also proves that ITW is not a place-dependent event; the value we believe in is the most important thing in ITW.

What strengths are inherited in this edition from past ones?
Having well-prepared, valuable content to present to the attendees is always our main intention, as well as trying to cover most of the topics trending these days.

ITW11 was special for its many public figures, while ITW12 was special for being a SPAC event with international speakers. What will be special about this episode?
This is a hard question that can't easily be answered, but to make it simple, we are trying to make this episode the best ever. We look forward to providing the event with whatever will benefit the attendees and make them feel the change.

How will you encourage interaction between the attendees and the speakers during the event?
Through the discussion period that each session will include. We are still working on making the interaction even better.

How will you keep alive the place and the privileges ITW used to enjoy in the Great Hall?
We focus on the value behind the event; that is our main target. We work to provide others with what will make them work better, think better and try something new. We are doing our best to make every attendee feel the real value of the event and not feel that anything has changed with the hosting place. As was mentioned before, ITW is not a place-dependent event; it's a spirit.

How will ITW13 and IEEE AlexSB add value to each other?
ITW is the oldest event made by IEEE AlexSB; you could say it's a feeling inherited across many generations. What is done throughout the year, the events other than ITW such as seminars, workshops and visits, directly impacts ITW, and ITW does the same for IEEE AlexSB. No one can separate ITW from IEEE AlexSB, as the branch is the original source of the event.

How do you imagine the closing ceremony this year? How do you feel about it?
The closing ceremony is going to be different this year. I am feeling a lot of enthusiasm through these last days, and expecting more and more. A lot of work is yet to be finished, and all of us are working on making these three days a lifetime event to remember forever. In the closing, we share our feelings with the others; we share with them all the moments we experienced to make the event meet their expectations.

What is your measure of success for this event?
We are working to make our attendees gain something they really need by attending ITW. Seeing the happiness and satisfaction in their eyes is enough to say that they enjoyed ITW13.

Amr Essam Hassan

38

Quantum computers promise to perform certain types of operations much more quickly than conventional digital computers. But many challenges must be addressed before these ultra-fast machines become available; among them is the loss of order in the system, a problem known as quantum decoherence, which worsens as the number of bits in a quantum computer increases. One proposed solution is to divide the computing among multiple small quantum computers that would work together much as today's multi-core supercomputers team up to tackle big digital operations. The individual computers in such a system could communicate quantum information using Bose-Einstein condensates (BECs): clouds of ultra-cold atoms that all exist in exactly the same quantum state. The approach could address the decoherence problem by reducing the number of bits necessary for a single computer.

Now, a team of physicists at the Georgia Institute of Technology has examined how this Bose-Einstein communication might work. The researchers determined the amount of time needed for quantum information to propagate across their BEC, essentially establishing the top speed at which such quantum computers could communicate. "What we did in this study was look at how this kind of quantum information would propagate," said Chandra Raman, an associate professor in Georgia Tech's School of Physics. "We are interested in the dynamics of this quantum information flow not just for quantum information systems, but also more generally for fundamental problems in physics." The research is scheduled to be published in the April 19 online version of the journal Physical Review Letters. The research was funded by the U.S. Department of Energy

(DOE) and the National Science Foundation (NSF). The work involved both an experimental physics group headed by Raman and a theoretical physics group headed by associate professor Carlos Sa de Melo, also in the Georgia Tech School of Physics. The researchers first assembled a gaseous Bose-Einstein condensate that consisted of as many as three million sodium atoms cooled to nearly absolute zero. To begin the experiment, they switched on a magnetic field applied to the BEC that instantly placed the system out of equilibrium. That triggered spin-exchange collisions as the atoms attempted to transition from one ground state to a new one. Atoms near one another became entangled, pairing up with one atom's spin pointing up and the other's pointing down. This pairing of opposite spins created a correlation between pairs of atoms that moved through the entire BEC as it established a new equilibrium.

The researchers, who included graduate student Anshuman Vinit and former postdoctoral fellow Eva Bookjans, measured the correlations as they spread through the cloud of cold atoms. At first, the quantum entanglement was concentrated in space, but over time it spread outward, much as a drop of dye diffuses through water. "You can imagine having a drop of dye that is concentrated at one point in space," Raman said. "Through diffusion, the dye molecules move throughout the water, slowly spreading throughout the entire system." The research could help scientists anticipate the operating speed for a quantum computing system composed of many cores communicating through a BEC. "This propagation takes place on the time scale of ten to a hundred milliseconds," Raman said. "This is the speed at which quantum information naturally flows through this kind of system."

"If you were to use this medium for quantum communication, that would be its natural time scale, and that would set the timing for other processes," Raman said. Though relevant to the communication of quantum information, the process also showed how a large system undergoing a phase transition does so in localized patches that expand as they attempt to incorporate the entire system. "An extended system doesn't move from one phase to another in a uniform way," said Raman. "It does this locally. Things happen locally that are not connected to one another initially, so you see this inhomogeneity." Beyond quantum computing, the results may also have implications for quantum sensing and for the study of other physical systems that undergo phase transitions. "Phase transitions have universal properties," Raman noted. "You can take the phase transitions that happen in a variety of systems and find that they are described by the same physics. It is a unifying principle."
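The dye-diffusion picture Raman describes can be mimicked with a toy one-dimensional diffusion model (a qualitative sketch only, in no way a simulation of a BEC): a quantity concentrated at one grid point spreads outward step by step, exactly the locally-seeded spreading the article describes.

```python
# Toy 1-D diffusion: a spike concentrated at the centre of a grid
# spreads outward over time (explicit finite differences; the 0.2
# step rate is below the 0.5 stability limit for this scheme).

def diffuse(field, rate, steps):
    """Explicit finite-difference diffusion; boundary cells held at zero."""
    n = len(field)
    for _ in range(steps):
        new = field[:]
        for i in range(1, n - 1):
            new[i] = field[i] + rate * (field[i - 1] - 2 * field[i] + field[i + 1])
        field = new
    return field

start = [0.0] * 21
start[10] = 1.0                      # everything concentrated at the centre
after = diffuse(start, rate=0.2, steps=30)

assert after[10] < 0.5               # the central peak has flattened...
assert after[5] > 0.0                # ...and material has reached 5 cells away
```

The initially localized spike ends up spread across the whole grid, the same qualitative behaviour as the correlations spreading through the condensate.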

Raman hopes the work will lead to new ways of thinking about quantum computing, regardless of its immediate practical use. "One paradigm of quantum computing is to build a linear chain of as many trapped ions as possible and to simultaneously engineer away as many challenges as possible," he said. "But perhaps what may be successful is to build these smaller quantum systems that can communicate with one another. It's important to try as many things as possible and to keep an open mind. We need to try to understand these systems as well as we can." This research was supported by the Department of Energy (DOE) through grant DE-FG-02-03ER15450 and by the National Science Foundation under grant PHY1100179. The conclusions in this article are those of the principal investigator and do not necessarily represent the official views of the DOE or the NSF.

John Toon
http://www.gtresearchnews.gatech.edu/boseeinstein-condensates-for-quantum-computers/

41
In synthetic biology, the equivalent of a Java virtual machine might be that you could create your own compartment in any type of cell, so your engineered DNA wouldn't run willy-nilly.

Drew Endy

Endy is the co-director of the International Open Facility Advancing Biotechnology (BIOFAB, for short), where he's part of a team that's developing a language that will use genetic data to actually program biological cells. That may seem like the stuff of science fiction, but the project is already underway, and the team intends to open source the language so that other scientists can use it, modify it and perfect it. The effort is part of a sweeping movement to grab hold of our genetic data and directly improve the way our bodies behave, a process known as bioengineering. With the Supreme Court exploring whether genes can be patented, the bioengineering world is at a crossroads, but scientists like Endy continue to push this technology forward. Genes contain information that defines the way our cells function, and some parts of the genome express themselves in much the same way across different types of cells and organisms.

This would allow Endy and his team to build a language scientists could use to carefully engineer gene expression, what they call the layer between the genome and all the dynamic processes of life. According to Ziv Bar-Joseph, a computational biologist at Carnegie Mellon University, gene expression isn't that different from the way computing systems talk to each other. "You see the same behavior in system after system. That's also very common in computing," he says. Indeed, since the '60s, computers have been built to operate much like cells and other biological systems: they're self-contained operations with standard ways of trading information with each other. The BIOFAB project is still in the early stages. Endy and the team are creating the most basic of building blocks: the grammar for the language. Their latest achievement, recently reported in the journal Science, has been to create a way of controlling and amplifying the signals sent from the genome to the cell.

Endy compares this process to an old-fashioned telegraph. "If you want to send a telegraph from San Francisco to Los Angeles, the signals would get degraded along the wire," he says. "At some point, you have to have a relay system that would detect the signals before they completely went to noise and then amplify them back up to keep sending them along their way." And, yes, the idea is to build a system that works across different types of cells. In the '90s, the computing world sought to create a common programming platform for building applications across disparate systems, a platform called the Java virtual machine. Endy hopes to duplicate the Java VM in the biological world. "Java software can run on many different hardware operating system platforms. The portability comes from the Java virtual machine, which creates a common operating environment across a diversity of platforms such that the Java code is running in a consistent local environment," he says. "In synthetic biology, the equivalent of a Java virtual machine might be that you could create your own compartment in any type of cell, [so] your engineered DNA wouldn't run willy-nilly. It would run in a compartment that provided a common sandbox for operating your DNA code." According to Endy, this notion began with a group of students from Abraham Lincoln High School in San Francisco a half decade ago, and he's now calling for a commercial company to recreate Sun Microsystems' Java vision in the biological world. It's worth noting, however, that this vision never really came to fruition and that Sun Microsystems is no more. Nonetheless, this is what Endy is shooting for, right down to Sun's embrace of open source software. The BIOFAB language will be freely available to anyone, and it will be a collaborative project. Progress is slow, but things are picking up. At this point, the team can get cells to express up to ten genes at a time with very high reliability.

A year ago, it took them more than 700 attempts to coax the cells to make just one. With the right programming language, he says, this should expand to about a hundred or more by the end of the decade. The goal is to make the language insensitive to the output genes, so that cells will express whatever genes a user wants, much as a program's print function works regardless of what set of characters you feed it. What does he say to those who fear the creation of Frankencells, biological nightmares that will wreak havoc on our world? "It could go wrong. It could hurt people. It could be done irresponsibly. Assholes could misuse it. Any number of things are possible. But note that we're not operating in a vacuum," he says. "There's a history of good applications being developed and regulations being practical and being updated as the technology advances. We need to be vigilant as things continue to change. It's the boring reality of progress." He believes this work is not only essential, but closer to reality than the world realizes. "Our entire civilization depends on biology. We need to figure out how to partner better with nature to make the things we need without destroying the environment," Endy says. "It's a little bit of a surprise to me that folks haven't come off the sidelines from other communities and helped more directly and started building out this common language for programming life. It kind of matters."

43

Popular texting, messaging and microblog apps developed for the Android smartphone have security flaws that could expose private information or allow forged fraudulent messages to be posted, according to researchers at the University of California, Davis. Zhendong Su, professor of computer science, said that his team has notified the app developers of the problems, although it has not yet had a response. The security flaws were identified by graduate student Dennis (Liang) Xu, who collected about 120,000 free apps from the Android marketplace. The researchers focused initially on the Android platform, which has about a half-billion users worldwide; Android is quite different from Apple's iOS platform, but there may well be similar problems with iPhone apps, Xu said. The victim would first have to download a piece of malicious code onto their phone. This could be disguised as or hidden in a useful app, or attached to a "phishing" email or Web link. The malicious code would then invade the vulnerable programs. The programs were left vulnerable because their developers inadvertently left parts of the code public that should have been locked up, Xu said. "It's a developer error," Xu said. "This code was intended to be private but they left it public." Su and Xu, with UC Davis graduate student Fangqi Sun and visiting scholar Linfeng Liu of Xi'an Jiaotong University, China, found that many of the apps they surveyed had potential vulnerabilities. They looked closely at a handful of major applications that turned out to have serious security flaws. Handcent SMS, for example, is a popular text-messaging app that allows users to place some text messages in a private, password-protected inbox. Xu found that it is possible for an attacker to access and read personal information from the app, including "private" messages. WeChat is an instant messaging service popular in China and similar to the Yahoo and AOL instant messengers. The service normally runs in the background on a user's phone and sends notifications when messages are received. Xu discovered a way for malicious code to turn off the WeChat background service, so a user would think the service is still working when it is not. Weibo is a hugely popular microblog service that has been described as the Chinese equivalent of Twitter. But its Android client is vulnerable, and it is possible for malicious code to forge and post fraudulent messages, Xu said. The researchers have submitted a paper on the work to the Systems, Programming, Languages and Applications: Software for Humanity (SPLASH) 2013 conference to be held in Indianapolis this October.
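The "code left public" error the researchers describe corresponds, in Android terms, to app components that are exported and therefore reachable by other apps. A hypothetical manifest fragment (the component and permission names here are invented for illustration) shows the difference:

```xml
<!-- Hypothetical fragment of AndroidManifest.xml; names are invented. -->

<!-- Private: other apps cannot start or bind to this service. -->
<service
    android:name=".PrivateInboxService"
    android:exported="false" />

<!-- If a component must be reachable from outside, gate it behind
     a permission instead of leaving it fully public. -->
<receiver
    android:name=".MessageReceiver"
    android:exported="true"
    android:permission="com.example.messages.INTERNAL_ONLY" />
```

Notably, components that declare an intent filter are exported by default on the Android versions of the time unless `android:exported="false"` is set explicitly, which is exactly the kind of inadvertent exposure the researchers describe.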

44
In June, Harvard's Clean Energy Project (CEP) plans to release to solar power developers a list of the top 20,000 organic compounds that could be used to make cheap, printable photovoltaic cells (PVCs). The list, culled from about seven million organic molecules that a crowdsourcing-style project has been crunching over the past two-plus years, could lead to PVCs that cost about as much as paint to cover a one-meter-square wall. "We're in the process of wrapping up our first analysis and releasing all the data very soon," said Alan Aspuru-Guzik, an associate professor of chemistry and chemical biology at Harvard. Today, the most popular PVCs are made of silicon and cost about $5 per wafer to produce. Silicon PVCs have a maximum solar conversion efficiency rate of about 12%, meaning only 12% of the light that hits them is converted to energy. There is also a small niche market of organic PVC vendors, but their solar cells offer only about a 4% to 5% efficiency rate in converting solar rays to energy. In order for a solar product to be competitive, each cell would need to cost about 50 cents, according to Aspuru-Guzik. The Clean Energy Project uses the computing resources of IBM's World Community Grid for the computational chemistry needed to find the best molecules for organic photovoltaics. IBM's World Community Grid allows anyone who owns a computer to install secure, free software that captures the computer's spare power when it is on and idle. By pooling the surplus processing power of about 6,000 computers around the world, the Clean Energy Project has been able to come up with a list of organic photovoltaics that could be used to create inexpensive solar cells. The computations also look for the best ways to assemble the molecules to make those devices. Computational chemists typically calculate the potential for photovoltaic efficiency one organic molecule at a time.
Over the past few years, computational chemists have identified a few organic compounds with the potential to offer around 10% energy conversion levels. "But that's only two or three," Aspuru-Guzik said. "Through our project, we've identified 20,000 of them at that level of performance." In fact, CEP's list of molecules includes some that have upwards of 13% solar conversion efficiency rates, Aspuru-Guzik said. The computing resources from IBM's World Community Grid are split for the CEP: some of the computers in the grid are making mechanical calculations of molecular crystals, thin films and molecular and polymer blends; others are making electronic structure calculations to determine the relevant optical and electronic transport properties of the molecules. Harvard has also constructed significant data storage facilities to capture the results of the computations. Each molecular computation produces on average about 20MB of data. In total, the global grid computing architecture generates about 750GB of data per day. So far, the data has grown to about 400TB. Harvard has filled racks of servers with 4U-high hard drive arrays. Each array is filled with 45 7,200rpm 3TB hard drives from Western Digital subsidiary HGST. "The data we're creating will ultimately benefit mankind with cleaner energy solutions," Aspuru-Guzik said. "Accordingly, we designed our Jabba storage arrays with built-in redundancies.

But the key to the arrays' performance is the use of reliable, high-capacity, and low-power storage from HGST. We've filled nearly 150 HGST drives to this point and are currently building Jabba 5 and 6 to handle the enormous amount of data generated for the project."
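The storage figures quoted above can be sanity-checked with a little arithmetic (raw capacities only; redundancy and filesystem overhead are ignored):

```python
# Back-of-envelope check of the figures in the article.

drives_per_array = 45
drive_tb = 3
array_tb = drives_per_array * drive_tb       # raw capacity of one "Jabba" array
print(array_tb, "TB per array")              # 135 TB

computation_mb = 20                          # average output per molecule
daily_gb = 750                               # grid output per day
runs_per_day = daily_gb * 1000 // computation_mb
print(runs_per_day, "molecular computations per day")  # 37500

total_tb = 400                               # data accumulated so far
days_accumulated = total_tb * 1000 / daily_gb
print(round(days_accumulated), "days of output at today's rate")  # 533
```

So a single 45-drive array holds roughly a third of the project's current 400TB, consistent with the "nearly 150 drives filled" figure in the quote.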

46

With massive amounts of computational power, machines can now recognize objects and translate speech in real time. Artificial intelligence is finally getting smart.
When Ray Kurzweil met with Google CEO Larry Page last July, he wasn't looking for a job. A respected inventor who's become a machine-intelligence futurist, Kurzweil wanted to discuss his upcoming book How to Create a Mind. He told Page, who had read an early draft, that he wanted to start a company to develop his ideas about how to build a truly intelligent computer: one that could understand language and then make inferences and decisions on its own. Such an effort would require nothing less than Google-scale data and computing power. "I could try to give you some access to it," Page told Kurzweil. "But it's going to be very difficult to do that for an independent company." So Page suggested that Kurzweil, who had never held a job anywhere but his own companies, join Google instead. It didn't take Kurzweil long to make up his mind: in January he started working for Google as a director of engineering. "This is the culmination of literally 50 years of my focus on artificial intelligence," he says.

Kurzweil was attracted not just by Google's computing resources but also by the startling progress the company has made in a branch of AI called deep learning. Deep-learning software attempts to mimic the activity in layers of neurons in the neocortex, the wrinkly 80 percent of the brain where thinking occurs. The software learns, in a very real sense, to recognize patterns in digital representations of sounds, images, and other data. The basic idea, that software can simulate the neocortex's large array of neurons in an artificial neural network, is decades old, and it has led to as many disappointments as breakthroughs. But because of improvements in mathematical formulas and increasingly powerful computers, computer scientists can now model many more layers of virtual neurons than ever before. With this greater depth, they are producing remarkable advances in speech and image recognition. Last June, a Google deep-learning system that had been shown 10 million images from YouTube videos proved almost twice as good as any previous image recognition effort at identifying objects such as cats. Google also used the technology to cut the error rate on speech recognition in its latest Android mobile software. In October, Microsoft chief research officer Rick Rashid wowed attendees at a lecture in China with a demonstration of speech software that transcribed his spoken words into English text with an error rate of 7 percent, translated them into Chinese-language text, and then simulated his own voice uttering them in Mandarin. That same month, a team of three graduate students and two professors won a contest held by Merck to identify molecules that could lead to new drugs. The group used deep learning to zero in on the molecules most likely to bind to their targets. Google in particular has become a magnet for deep learning and related AI talent.

In March the company bought a startup cofounded by Geoffrey Hinton, a University of Toronto computer science professor who was part of the team that won the Merck contest. Hinton, who will split his time between the university and Google, says he plans to "take ideas out of this field and apply them to real problems" such as image recognition, search, and natural-language understanding.

All this has normally cautious AI researchers hopeful that intelligent machines may finally escape the pages of science fiction. Indeed, machine intelligence is starting to transform everything from communications and computing to medicine, manufacturing, and transportation. The possibilities are apparent in IBM's Jeopardy!-winning Watson computer, which uses some deep-learning techniques and is now being trained to help doctors make better decisions. Microsoft has deployed deep learning in its Windows Phone and Bing voice search.

Extending deep learning into applications beyond speech and image recognition will require more conceptual and software breakthroughs, not to mention many more advances in processing power. And we probably won't see machines we all agree can think for themselves for years, perhaps decades, if ever. But for now, says Peter Lee, head of Microsoft Research USA, deep learning has "reignited some of the grand challenges in artificial intelligence."

Building a Brain

There have been many competing approaches to those challenges. One has been to feed computers with information and rules about the world, which required programmers to laboriously write software that is familiar with the attributes of, say, an edge or a sound. That took lots of time and still left the systems unable to deal with ambiguous data; they were limited to narrow, controlled applications such as phone menu systems that ask you to make queries by saying specific words.

Neural networks, developed in the 1950s not long after the dawn of AI research, looked promising because they attempted to simulate the way the brain worked, though in greatly simplified form. A program maps out a set of virtual neurons and then assigns random numerical values, or "weights," to connections between them. These weights determine how each simulated neuron responds, with a mathematical output between 0 and 1, to a digitized feature such as an edge or a shade of blue in an image, or a particular energy level at one frequency in a phoneme, the individual unit of sound in spoken syllables.
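This mapping from randomly weighted connections to a 0-to-1 response can be sketched in a few lines of Python. The layer sizes, feature values, and function names here are illustrative, not any particular system's:

```python
import math
import random

def make_layer(n_inputs, n_neurons, seed=0):
    """Assign random numerical weights to the connections into each neuron."""
    rng = random.Random(seed)
    return [[rng.uniform(-1.0, 1.0) for _ in range(n_inputs)]
            for _ in range(n_neurons)]

def sigmoid(x):
    """Squash a weighted sum into the 0-to-1 range described above."""
    return 1.0 / (1.0 + math.exp(-x))

def activate(layer, features):
    """Each simulated neuron responds to the digitized features in [0, 1]."""
    return [sigmoid(sum(w * f for w, f in zip(weights, features)))
            for weights in layer]

layer = make_layer(n_inputs=3, n_neurons=2)
# Hypothetical digitized features, e.g. edge strength, hue, energy level.
outputs = activate(layer, [0.2, 0.9, 0.4])
assert all(0.0 <= o <= 1.0 for o in outputs)
```

Stacking several such layers, each feeding the next, is what gives "deep" learning its depth.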

Programmers would train a neural network to detect an object or phoneme by blitzing the network with digitized versions of images containing those objects or sound waves containing those phonemes. If the network didn't accurately recognize a particular pattern, an algorithm would adjust the weights. The eventual goal of this training was to get the network to consistently recognize the patterns in speech or sets of images that we humans know as, say, the phoneme "d" or the image of a dog. This is much the same way a child learns what a dog is by noticing the details of head shape, behavior, and the like in furry, barking animals that other people call dogs.

But early neural networks could simulate only a very limited number of neurons at once, so they could not recognize patterns of great complexity. They languished through the 1970s.
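A minimal sketch of this show-and-adjust loop is the classic perceptron update rule, shown here for a single simulated neuron learning a toy pattern (logical AND). The function names, learning rate, and toy data are illustrative:

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Repeatedly show the network labelled patterns; when it misclassifies
    one, nudge the weights toward the correct answer."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            s = sum(w * xi for w, xi in zip(weights, x)) + bias
            predicted = 1 if s > 0 else 0
            error = target - predicted  # 0 when the pattern is recognized
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Toy "pattern detector": learn logical AND from four digitized examples.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train_perceptron(samples, labels)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

assert [predict(x) for x in samples] == labels
```

A single neuron like this can only learn linearly separable patterns, which is exactly the limitation that kept early networks from recognizing patterns of great complexity.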

Preventing Misinformation from Spreading through Social Media
New platforms for fact-checking and reputation scoring aim to better channel social media's power in the wake of a disaster.
The online crowds weren't always wise following the Boston Marathon bombings. For example, the online community Reddit and some Twitter users were criticized for pillorying an innocent student as a possible terrorist suspect. But some emerging technologies might be able to help knock down false reports and wring the truth from the fog of social media during crises.

Researchers from the Masdar Institute of Technology and the Qatar Computing Research Institute plan to launch Verily, a platform that aims to verify social media information, in a beta version this summer. Verily aims to enlist people in collecting and analyzing evidence to confirm or debunk reports. As an incentive, it will award reputation points, or dings, to its contributors. Verily will join services like Storyful that use various manual and technical means to fact-check viral information, and apps such as Swift River that, among other things, let people set up filters on social media to give more weight to trusted users in the torrent of posts following major events.

On Reddit, amateur sleuthing to identify possible bombing suspects led to accusations against Sunil Tripathi, a Brown University student reported missing weeks earlier (Reddit has since issued an apology); that accusation was then tweeted and retweeted many times. The underlying problem is a fearsome one: people want to share and spread information, whether accurate or not. "We're very far from a solution," says Ethan Zuckerman, who directs the Center for Civic Media at MIT.

The reporting around the Marathon bombing demonstrates that mainstream media has issues with verification that are as profound as anything faced online. Reputation scoring has worked well for e-commerce sites like eBay and Amazon and could help to clean up social media reports in some situations. Research efforts have also shown how to effectively mobilize many people on social media for a common task.

In a 2009 experiment, the U.S. Defense Advanced Research Projects Agency offered $40,000 to the first team that could identify the locations of 10 large red weather balloons lofted by DARPA at undisclosed locations across the United States. The winning team, from MIT, did it in less than nine hours using an incentive structure, fueled by cash rewards, to drum up viral participation on social media. Anyone who found a single balloon would get $2,000; someone who invited that person to join the hunt would get $1,000. A similar but harder challenge, in 2012, asked teams to find specific individuals within cities within 12 hours with only a single mugshot to work with. There again, a distributed cash reward system worked best.

Verily builds on lessons from both contests. The winning mugshot team included one of Verily's creators, computer scientist Iyad Rahwan, a graduate of MIT who is now at the Masdar Institute of Technology. "Recruiting people to join is part of the issue, but we also need to figure out how to remove false reports," Rahwan says. "Where the balloon challenge took nine hours, we hope to facilitate the crowdsourced evaluation of multimedia evidence on individual incidents in less than nine minutes."

The beta version of Verily will first be tested by its creators on a real-world weather disaster such as a hurricane or flood. Since such disasters come with some warning, Verily's creators can prepare humanitarian agencies to use the platform.
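The MIT team's incentive structure can be sketched as a payout along a referral chain. The article gives the finder's $2,000 and the inviter's $1,000; this sketch assumes the reward keeps halving further up the chain, and the names and function are hypothetical:

```python
def balloon_rewards(referral_chain, finder_prize=2000.0):
    """Split a prize along a referral chain, halving at each step up.

    referral_chain lists people from the balloon's finder back through
    whoever recruited them, e.g. ["finder", "inviter", "inviter's inviter"].
    """
    rewards = {}
    prize = finder_prize
    for person in referral_chain:
        rewards[person] = prize
        prize /= 2.0  # assumed: each level up the chain earns half as much
    return rewards

# carol found a balloon; bob invited carol; alice invited bob.
payout = balloon_rewards(["carol", "bob", "alice"])
assert payout == {"carol": 2000.0, "bob": 1000.0, "alice": 500.0}
```

The geometric payout makes recruiting others worthwhile even for people who never find a balloon themselves, which is what made participation spread virally.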
A piece of reported news, such as a photo of a flooded hospital circulating on Twitter, would be posted to Verily with a question: is the hospital really flooded?

Users would then examine the photo for signs of authenticity and also leverage their own social networks to investigate it. Humanitarian agencies working in the region could promote participation, as could the press and Twitter. Voters' reputation scores would increase or decrease over time; future votes from reliable people would get increased weight. And voters would be encouraged to bring others to the site; anyone brought in by someone with a good reputation would automatically start with a higher reputation themselves.

In many ways the platform is meant to resolve a design problem inherent in sites like Reddit, adds Patrick Meier, director of innovation at the Qatar institute, a co-creator of Verily, and former director of crisis mapping at Ushahidi, the online incident-reporting platform (see "Crisis Mapping Meets Check In"). "They don't have the design to facilitate these kinds of workflows and collaboration," he says. Verily could provide a rapid means to vet reports arising on sites like Reddit.

The other approaches are more basic. Storyful verifies videos to make sure news organizations don't get duped by phony ones. Staffers check veracity based on clues like weather reports, the angle of the sun, and visual landmarks. And beyond the Swift River app is a larger platform aimed at letting humanitarian and other agencies manage and make sense of social media reports and other data.

Meanwhile, old-fashioned methods of finding the truth are holding up pretty well. In Boston, the marathon bombers were actually found through conventional witness reports and reviews of video surveillance camera footage at retail stores.
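A reputation-weighted vote of the kind described could look like the following Python sketch. The scoring scheme, default weights, and function names are assumptions for illustration, not Verily's actual design:

```python
def verdict(votes, reputations):
    """Aggregate yes/no votes, weighting each by the voter's reputation,
    and return the weighted share that said 'yes' (0.5 if no votes)."""
    yes = sum(reputations.get(v, 1.0) for v, ans in votes.items() if ans)
    total = sum(reputations.get(v, 1.0) for v in votes)
    return yes / total if total else 0.5

def update_reputation(reputations, voter, voted_correctly, step=0.1):
    """Award points -- or dings -- once the report's truth is settled."""
    current = reputations.get(voter, 1.0)
    delta = step if voted_correctly else -step
    reputations[voter] = max(0.1, current + delta)  # floor, never zero weight

reps = {"alice": 2.0, "bob": 0.5}
votes = {"alice": True, "bob": False, "carol": True}  # carol is new: weight 1.0
share = verdict(votes, reps)  # (2.0 + 1.0) / (2.0 + 0.5 + 1.0)
assert abs(share - 3.0 / 3.5) < 1e-9

update_reputation(reps, "bob", voted_correctly=False)  # bob gets a ding
```

Weighting by reputation means a handful of proven voters can outweigh a pile-on of new accounts, which is the design gap the creators see in sites like Reddit.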

Android for Archaeology

FAIMS project prepares for public beta of Android-based digital tools for archaeologists.
Researchers at the University of NSW are preparing to launch a public beta of a new open source system that could drive a digital revolution in the field of archaeology. The Federated Archaeological Information Management System (FAIMS) Project, led by Dr Shawn Ross, a senior lecturer at UNSW's School of Humanities, received funding from the federal government's National eResearch Collaboration Tools and Resources (NeCTAR) program. The aim was to develop a new generation of archaeological tools that could work with modern Android-based mobile devices and promote the production of compatible datasets from different archaeological projects.

Developers at UNSW, the Intersect research consortium in NSW and VeRSI in Victoria have worked on a prototype Android app that can be used to gather data in the field, which is then collated in a server-based database system. A workshop held at UNSW in August last year had made clear that a key barrier for the project was the different workflows and terminology employed by archaeologists.

"If you go out and you run an archaeological project and use an Access database, when the time comes to put that into a repository, somebody, either you or the repository manager, has to spend a lot of time doing manual ontology mapping," Ross explained. Differences in terminology can extend to even the most basic parts of an archaeological project, for example the volumes of earth excavated at a site. "Some projects call them a context, some call them a locus, some call them a spit, some call them a stratigraphic unit; there's about five different terms for even this most fundamental thing," Ross said. "The way things are now, for every single column in your database that you've got, you'd have to map it to the internal ontology of whatever repository you're putting [the data] into, and that's really time consuming; it's the biggest expense and time cost for repositories."
To overcome this, developers spent time making sure the database used by the app can be customised to suit the terminology and workflow used by different archaeologists. "We produced a fairly well-structured but generic underlying database," Ross said.

The app uses a domain/key normal form (DKNF) database. "You can customise it [for a] project just by changing the data in the database rather than the structure of the database. We looked at a number of NoSQL solutions for this, and they're just not mature enough for the Android environment, especially since we had to have them working offline and working with GIS [geographic information system]." The team went with a DKNF database using SQLite with SpatiaLite extensions. To customise the app for individual projects or teams, XML documents can be fed in that govern the database schema, the user interface and the logic for the interface.

"We're really happy in the end with that solution: having this generalised database that you can customise with a packet of XML documents. And we've already got two students who are working as QA people over at Intersect who have learned to write the XML documents," Ross said. The design "fits with modern design principles that not everyone needs to roll their own," Ross said. "You can have an underlying data store that you can use [but] each project can define their schema and interface [and] their workflow however they want, but at the same time try to produce compatible datasets, because we're trying to encourage research at not just one site but across sites, across projects."

The developers also employed techniques based on international standards for localisation, using string replacement so that the terms used in the app's interface can reflect the terms used by a particular archaeological team. Ross said that the app reflects a balance between offering the flexibility desired by archaeologists while still promoting the production of compatible datasets.
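The customise-by-data idea can be illustrated with a toy SQLite schema driven by an XML packet: the project's vocabulary is inserted as rows, and the table structure never changes. The table layout and XML format here are hypothetical, not the FAIMS schema:

```python
import sqlite3
import xml.etree.ElementTree as ET

# A generic entity/attribute/value layout: projects are customised by
# inserting rows (their own vocabulary), never by altering the tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE attribute (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE record    (id INTEGER PRIMARY KEY, entity TEXT);
    CREATE TABLE value     (record_id INTEGER REFERENCES record,
                            attribute_id INTEGER REFERENCES attribute,
                            content TEXT);
""")

# The project's own terminology arrives as an XML packet.
schema_xml = """
<module entity="excavation-unit">
  <attribute name="context"/>  <!-- another team might say "locus" or "spit" -->
  <attribute name="depth_cm"/>
</module>
"""
module = ET.fromstring(schema_xml)
for attr in module.findall("attribute"):
    conn.execute("INSERT INTO attribute (name) VALUES (?)", (attr.get("name"),))

rows = conn.execute("SELECT name FROM attribute ORDER BY id").fetchall()
assert [r[0] for r in rows] == ["context", "depth_cm"]
```

A team that says "locus" instead of "context" ships a different XML packet; the underlying database, and hence cross-project compatibility, stays the same.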
The app allows the recording of text, location, imagery and audio data on Android devices, and a server can be deployed at archaeological sites that, when devices come within range, will synchronise the data on them as well as back it up on the server in an append-only data store (to offer versioning of data, if there needs to be a rollback, for example).
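An append-only store with versioning, as described, might look like this SQLite sketch: every save inserts a new row rather than updating in place, so any record can be rolled back. Table and function names are illustrative:

```python
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE record_version (
                  record_id TEXT, version INTEGER, payload TEXT,
                  saved_at REAL,
                  PRIMARY KEY (record_id, version))""")

def save(record_id, payload):
    """Append a new version; existing rows are never modified."""
    row = db.execute("SELECT COALESCE(MAX(version), 0) FROM record_version "
                     "WHERE record_id = ?", (record_id,)).fetchone()
    db.execute("INSERT INTO record_version VALUES (?, ?, ?, ?)",
               (record_id, row[0] + 1, payload, time.time()))

def load(record_id, version=None):
    """Latest version by default; pass an older version to roll back."""
    if version is None:
        q = ("SELECT payload FROM record_version WHERE record_id = ? "
             "ORDER BY version DESC LIMIT 1")
        return db.execute(q, (record_id,)).fetchone()[0]
    q = "SELECT payload FROM record_version WHERE record_id = ? AND version = ?"
    return db.execute(q, (record_id, version)).fetchone()[0]

save("find-042", "pottery sherd, trench A")
save("find-042", "pottery sherd, trench A, Iron Age")
assert load("find-042") == "pottery sherd, trench A, Iron Age"
assert load("find-042", version=1) == "pottery sherd, trench A"
```

Because nothing is ever overwritten, synchronising devices that worked offline reduces to appending each device's new versions to the server's log.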

"If you're working in the city and let's say you're processing pottery, where everyone is more or less in the same place at the same time, everybody [can work] on their mobile devices and it will automatically synchronise them in more or less real time," Ross said. "But let's say you're doing prospection. You're going out to the middle of nowhere in Western Australia and you're sending teams out and they're going through the countryside looking for any kind of archaeological remains.

"You can send these teams out with their separate mobile devices, they can work totally offline, totally apart from each other, and then at the end of the day when everybody comes back to the base or the camp [the devices] will find the server, hook up to the server, synchronise with one another and a copy of everything goes on the server."

As well as being able to attach data captured by mobile devices themselves to records, the system will allow data captured by other devices, such as images captured by high-resolution SLR cameras or drawings done by hand, to be linked to records.

The team intends to start conducting field testing with third parties, such as archaeological consultancies, by late April or early May. "At first we'll run [the system] side-by-side with their existing systems so it will give us a chance to customise it for them and make sure it's working without having any high risks of data loss," Ross said. "We'll also be doing a study of what the time comparisons are like; what kind of efficiency gains you can expect to get from using this system compared to existing workflows. We're pretty much planning that by mid-year we're going to be ready to really start working intensively with a larger group of users to get the system deployed in the field."

"We'll work with five to 10 projects to customise and deploy it," Ross said.
"We'll work with the consultants or archaeologists to produce the XML documents, go out with them into the field and work very closely with them on customisation and implementation.

"If they hit a problem or a bug or there's a feature that's not there or doesn't work right, then we'll be able to go back to our developers and turn that round really quickly."

Ross said the team was keen to receive another year of funding support to continue working with clients, as well as add more features, such as the ability to run the whole system online without a local server if the users are in a networked environment. "Mostly we'd like to continue this intensive work with clients over the next year, and I think if we can do that, with the kind of uptake it looks like we're likely to get, we should be able to have a sustainable system that will have enough revenue [to sustain itself]." The software is open source, but an organisation, most likely the Intersect consortium in the short to medium term, would offer support, such as aiding with customisation.
