Remote Virtual Peripheral Framework: Enabling Dynamically Composed Devices
Felipe Gil-Castiñeira
Departamento de Enxeñaría Telemática, Universidade de Vigo, E.T.S.E. Telecomunicación. Rúa Maxwell S/N, Campus Universitario, 36310 Vigo (Spain). Email: xil@det.uvigo.es. Telephone: +34 986-812-174
Raja Bose
Nokia Research Center, 955 Page Mill Road, Palo Alto, CA 94304-1003 (United States). Email: raja.bose@nokia.com. Telephone: +1 650-496-4400
Abstract—As the number of mobile devices grows at a rapid pace, the heterogeneity in their sensing and user interaction capabilities is also increasing. Mobile devices are increasingly being equipped with sensors such as accelerometers, gyros and other gesture-sensitive capabilities which propel user experiences beyond the usual keyboard-mouse-touch paradigm. A natural question is how specific sensors of different devices can interoperate on demand in a seamless fashion so that they can be composed together into a user interface providing a rich and natural user experience. For example, it should be possible to use the accelerometer in a mobile phone as an input device to control a video game running on a PC, or a high-definition camera attached to a television to act as the input video source for a video chat session running on a mobile phone. In this paper we present a system framework and architecture which enables the seamless interoperability and composition of sensing and user interaction capabilities of different devices into a unified user interface, in an application-agnostic manner.
Index Terms—Pervasive systems, middleware, embedded systems, device composition.
I. INTRODUCTION
Imagine you are playing your favorite 3D game on your mobile phone, using the phone's accelerometer to control your movements, and you would like to continue to use the same natural gestures when you want to play the game on your desktop PC. Or imagine that you are using your phone's navigation software while walking but, once you get in your car, you would like the same software to seamlessly start using the car's more accurate embedded GPS system without skipping a beat.

While such scenarios can be implemented by targeting specific applications and modifying them [1], [2], [3], a more application-agnostic, framework-based approach is required where applications designed for utilizing specific user interaction capabilities can run without modification on other platforms which do not have those capabilities. In such a scenario, the applications should be able to interact with and get user input from external devices in the same manner as with input peripherals which are local to the system.

In this paper we present a framework and architecture called the Remote Virtual Peripheral Framework, which enables user input mechanisms present in one device to seamlessly interoperate with applications running on another device over any IP-based connection. Furthermore, such interoperability is provided in a transparent manner, such that the application is not aware whether it is accessing a peripheral which is physically attached to its host device or is part of an external device.

The Remote Virtual Peripheral Framework can be used to implement a new usage model for mobile devices and associated applications. The embedded peripherals in mobile devices are very convenient input mechanisms for interacting with fixed desktop computers or with home entertainment devices such as Set Top Boxes, projectors and televisions. For example, the accelerometer of a smartphone (such as the Nokia N900) can be used to interact with desktop versions of games in ways not possible using a desktop's or laptop's usual input peripherals. Moreover, a device's touchscreen can be used as a remote input device to browse the Web on a connected TV. The desktop PC and Set Top Box applications can be developed in the same manner as if they were only interacting with locally connected input peripherals. Furthermore, the new (remote) input devices can be made available to existing applications without requiring modifications. For example, it is easy to add the remote phone touchscreen as a conventional input device for an existing web browser running on a Set Top Box.

The rest of this paper is organized as follows: in Section II we discuss related work, in Section III we describe the system framework architecture and a reference implementation for the Nokia N900 smartphone, and finally in Section IV we summarize our conclusions and future work.

II. RELATED WORK
In this section we present related work in the field of device integration and composition, or equivalently, work on protocols, technologies and architectures for using peripherals or components from remote devices transparently as local devices. This research area can be viewed as an evolution of distributed systems in smart environments. In those environments, a group of networked devices and services expose well-defined programming interfaces to create distributed applications, allowing the end users to interact with and control their environment [4].
This work is also closely related to the ubiquitous computing paradigm, in particular Activity Oriented Computing (AOC). This paradigm provides solutions in a user-centric approach, composing and deploying services to satisfy user demands [5]. Previous works in this field study how to compose services proactively or reactively, and they focus on the development of self-configurable and adaptive solutions. Therefore, our proposal can be used as a basis to facilitate access to devices and to construct distributed services in a more transparent and flexible way. As shown in this section, there has been prior research in this area, but our approach is fundamentally different because we propose a low-level middleware architecture which creates virtual local peripheral devices corresponding to remote device peripherals, thereby allowing applications to utilize remote device peripherals in the same manner as local peripherals, without requiring any modifications.
 A. H.325
H.325 [6] is an ITU-T initiative started as an evolution of multimedia protocols such as H.323 and SIP. It is also known as the Advanced Multimedia System (AMS), and is being designed as an advanced architecture to support distributed media applications involving multiple personal and public devices. The idea is to advance beyond traditional telephony systems, creating complex communication environments in which the user can make use of several devices during a session. For example, a user could start a voice call with his phone and add the input video stream from a fixed camera situated in the same room, send the video and audio stream from the remote peer to a TV, and even share a document or an application being executed on a mobile computer.

As a design principle, H.325 starts from the assumption that every application may exist as a group of distinct components that may be physically separated from the user terminal, and in consequence the ITU-T H.325 work group is defining the components and interfaces to make this possible. The user terminal (usually a mobile phone) will coordinate the network elements and the communication between peer applications (components) and will be the element responsible for establishment and tear-down of communications.

This approach clearly differs from the one proposed in this paper, because we try to simplify application development by abstracting away the fact that the application is using a component or a resource from a remote device. Hence, our framework can be used to achieve results similar to H.325, but in a much simpler way.
 B. Remote USB
The advent of remote display protocols, in conjunction with the popularization of USB-enabled devices, provided the impetus to make the USB devices plugged into a client device available to a remote server.

This problem has been solved by several commercial applications, and there are also some open proposals such as [7], where the authors present a USB/IP layer to create a USB extension over IP. To achieve this objective they added a Virtual Host Controller Interface (VHCI) driver as a virtual bus driver (i.e., a USB Host Controller Driver that receives requests from the USB core driver and delivers them to the correct device). This virtual driver collects all the messages from the upper layers in one device and sends them to a stub driver in the remote computer. They also implement a client and a server to manage the lifecycle of virtual USB connections.
C. Network Block Device
There are different ways to share a disk over the network, and since the early 1980s several implementations such as NFS and SMB/CIFS have been available for providing remote file access, especially across UNIX machines. However, sometimes the overhead introduced by the upper-level protocol is not acceptable or desirable. In that case there are industrial solutions to control hard disks remotely at the block-device level, such as Storage Area Networks (SAN), the Internet Small Computer System Interface (iSCSI) and ATA-over-Ethernet (AoE). Another solution [8] is already implemented in the Linux kernel, namely the Linux Network Block Device (NBD). NBD is implemented as a daemon on the server (nbd-server) and as a kernel module plus a user-level application on the client. This application is used to tell the kernel module where to mount the remote block device, creating a local device /dev/nbdX which can be accessed from local applications like any other disk in the system.
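Once the client-side tooling has attached an export, applications need no NBD-specific logic at all. The following minimal Python sketch (assuming an export has already been attached as /dev/nbd0 and that the process has permission to open it) reads the first sector exactly as it would from a physical disk:

    # Read the first 512-byte sector of an NBD-backed block device.
    # Assumes /dev/nbd0 has already been attached by the NBD client
    # tooling; the device name is configuration-dependent.
    DEVICE = "/dev/nbd0"

    with open(DEVICE, "rb") as disk:
        sector = disk.read(512)

    print("read %d bytes from %s" % (len(sector), DEVICE))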
 D. D-Bus based user device drivers
In this paper we follow two different approaches for designing and implementing the Remote Virtual Peripheral Framework, namely using D-Bus and using Linux character device drivers. However, there are already proposals for architectures which move the device interface from the kernel to D-Bus. For example, in [9] the authors implemented a device driver framework denominated User Device Driver (UDD). To achieve this objective, they decided to implement as little code as possible in the Linux kernel and make use of the Userspace I/O (UIO) [10] system. This is a mechanism to implement most of the code handling a device in user space. It is only necessary to write a small kernel module; the remaining operations are executed by a user-level process, in this case a daemon responsible for handling the device operations and offering a D-Bus interface.

We have not used this framework, but this architecture could be useful in the future to move the lower-level device kernel interfaces to a higher level (D-Bus), and unify our framework into a single version which utilizes D-Bus for all interoperability use cases.
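For readers unfamiliar with UIO, the user-space half of such a driver is essentially a blocking read loop on the UIO device node: each read returns a 4-byte interrupt counter, and device memory can be mapped with mmap(). The Python sketch below illustrates only this generic UIO mechanism with an assumed /dev/uio0 node; it is not the UDD daemon of [9]:

    # Generic user-space UIO loop: block until the device raises an
    # interrupt, then report the cumulative interrupt count.
    # /dev/uio0 is an assumed device node, used for illustration only.
    import os
    import struct

    fd = os.open("/dev/uio0", os.O_RDWR)
    try:
        while True:
            data = os.read(fd, 4)            # blocks until an interrupt
            (count,) = struct.unpack("I", data)
            print("interrupts so far:", count)
            # a real daemon would service the hardware here and then
            # publish the event on its D-Bus interface
    finally:
        os.close(fd)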
III. REMOTE VIRTUAL PERIPHERAL FRAMEWORK
In this section we present the proposed framework and architecture for remote virtual peripherals. During the design of the Remote Virtual Peripheral Framework, two principles were kept in mind:
- Keep the architecture simple and, if possible, do not modify existing components.
- Allow existing applications to use remote devices without modifying their code (and even without recompiling them, if they are executed on machines with the same CPU architecture).

Fig. 1. Attitude application being executed on a Nokia N900 device.

Two versions of the framework have been developed, each operating at a different system level. The first version works on top of the D-Bus architecture and shares D-Bus APIs corresponding to available peripherals between the two participating devices. The second version works at a lower level by exposing kernel character device interfaces for peripherals on both sides. In the future, we would like to consolidate both versions into a single unified framework.
As a proof of concept, we have implemented the proposed frameworks on a Nokia N900 smartphone running the Maemo 5 operating system. Maemo 5 offers two interfaces for interoperating with the embedded sensors and peripherals: a D-Bus API and a sysfs filesystem. For example, we can run applications like Attitude (Figure 1), which are designed to read acceleration information from a Nokia N900 device, on a Linux PC executing our Remote Virtual Peripheral Framework (Figure 2). By utilizing our framework, the application does not need to be modified at all. Moreover, in this particular case the Attitude application is written in Python, so it is not even necessary to recompile it to be executed on an x86 (Linux PC) or an ARM (N900) architecture.
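To make the sysfs side of this concrete, the Python sketch below polls the accelerometer the way a local application might. The path shown is the one commonly documented for the N900's accelerometer on Maemo 5, but it should be treated as an assumption rather than as part of our framework:

    # Poll the N900 accelerometer through sysfs and print x/y/z values.
    # The sysfs path is an assumption (commonly documented for Maemo 5);
    # adjust it for the actual device and firmware in use.
    import time

    ACCEL_PATH = "/sys/class/i2c-adapter/i2c-3/3-001d/coord"

    def read_axes(path=ACCEL_PATH):
        with open(path) as f:
            x, y, z = (int(v) for v in f.read().split())
        return x, y, z

    for _ in range(10):
        print("x=%d y=%d z=%d" % read_axes())
        time.sleep(0.1)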
 A. UPnP
Fig. 2. Attitude application running on a PC, but using our framework as the input device to read the accelerometer information from a Nokia N900.

In both framework versions, each device is exposed as a UPnP service. UPnP [11] takes advantage of existing standardized protocols (IP, TCP, UDP, HTTP, SOAP and XML) to support automatic discovery of devices and services. Both frameworks utilize UPnP to discover devices on the same network with specific peripherals which can be accessed as remote virtual peripherals. For our implementation, we have used the GUPnP (http://www.gupnp.org/) open source framework.
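At the protocol level, UPnP discovery reduces to an SSDP M-SEARCH multicast followed by unicast responses from matching devices. Our implementation delegates this to GUPnP, but the standalone Python sketch below (using the generic ssdp:all search target rather than a framework-specific one) shows what the discovery step looks like on the wire:

    # Send an SSDP M-SEARCH and print the first response line from each
    # replying device. The ssdp:all search target is generic; a real
    # deployment would search for its own device or service type.
    import socket

    MSEARCH = (
        "M-SEARCH * HTTP/1.1\r\n"
        "HOST: 239.255.255.250:1900\r\n"
        'MAN: "ssdp:discover"\r\n'
        "MX: 2\r\n"
        "ST: ssdp:all\r\n"
        "\r\n"
    )

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3.0)
    sock.sendto(MSEARCH.encode("ascii"), ("239.255.255.250", 1900))

    try:
        while True:
            data, addr = sock.recvfrom(65507)
            print(addr[0], data.decode("ascii", "replace").splitlines()[0])
    except socket.timeout:
        pass  # discovery window closed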
 B. D-Bus based Framework 
D-Bus [12] is an Interprocess Communication (IPC) system, designed to replace the CORBA and DCOP IPC mechanisms used before, respectively, in the GNOME and KDE desktop environments. It is a message bus system for GNU/Linux systems maintained by freedesktop.org. Nowadays, it is a key element in both desktop-based and mobile operating systems which use Linux. Specifically, D-Bus is used to realize three main functions:
- Exchange data between (system or user) processes.
- Facilitate sending events and signals through the system (for example, an incoming call could mute the media player).
- Invoke methods and request services from other objects or applications.

One important characteristic of D-Bus is that it uses messages as the base unit for IPC, not a bit stream (in contrast with other IPC mechanisms). The message format is binary, typed, fully aligned and simple. Another important characteristic is that it is a bus-based communication service. Applications can instantiate a private D-Bus to communicate, but the D-Bus architecture includes a message bus daemon that routes messages from an application to one or more processes. In fact, there are two different buses, namely:
- The System Bus: a system-wide bus running at the system level, used to send events such as the presence of new devices or the battery level.
- The Session Bus: a per-user and per-session daemon, used for general IPC among user applications.

In the context of the Remote Virtual Peripheral Framework, the system bus is the one utilized as a channel for transmitting information to/from the remote embedded peripherals. In D-Bus, messages are sent to objects, which are addressed using path names. An object could implement one or more interfaces.
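To illustrate this addressing model, the Python sketch below uses python-dbus to connect to the system bus and call a standard method on the bus daemon's own object. The bus names, object paths and interfaces exported by our framework for remote peripherals are not shown here; only well-known freedesktop.org names are used:

    # Connect to the D-Bus system bus and invoke a method on a
    # well-known object (the bus daemon itself).
    import dbus

    bus = dbus.SystemBus()
    proxy = bus.get_object("org.freedesktop.DBus", "/org/freedesktop/DBus")
    iface = dbus.Interface(proxy, dbus_interface="org.freedesktop.DBus")

    # ListNames() returns the bus names currently owned on the system bus.
    for name in iface.ListNames():
        print(name)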