DISCLAIMER The Linux Foundation frequently works with subject matter experts on Linux educational pieces. In this publication, famed Linux Devices editor Henry Kingman has created a fun history and background on embedded Linux; we hope you enjoy Henry's unique perspective on this fast-moving industry. The opinions found in this publication are Henry Kingman's alone and don't necessarily reflect those at the Linux Foundation.
Overview
This paper offers historical perspective and advice for those considering or planning to embed Linux. Before we plunge in, let's define some terms: By Linux, we mean any stack that includes a Linux kernel. Besides the kernel itself, stacks typically include a great deal of other software: typically the GNU C libraries, some graphics stack (like xorg), and the vendor's particular software value add. However, a Linux device may include little more than a bootloader, bootstrap loader, and initramfs (i.e., a Linux kernel and a module or two running on a ramdisk). By embedded device, we mean any device with a microprocessor, exclusive of PCs and servers. This describes the vast majority of computerized systems, since by most analyst estimates, PCs and servers consume only 1-5 percent of all microprocessor chips. The rest go into that vast, diverse, hard-to-generalize-about category known broadly as the embedded or device market. Many such systems, perhaps two thirds, have 8- and 16-bit microcontrollers that do not support rich 32-bit OSes like Linux. Even so, Linux-capable devices with 32-bit processors still far outnumber PCs and servers. Just consider all those devices everywhere around us: mobile phones, tablets, set-top boxes, networking gear, surveillance cameras, auto infotainment systems, HVAC and factory control systems, remote sensing stations, cellular radio towers... and on and on. Across the vast diversity of systems encompassed by this definition, Linux is a strong contender -- if not the de facto standard. By the mid-Otts, most analysts were recognizing Linux as the top embedded OS in terms of design starts and design wins, if not devices shipped outright.
The History of Embedded Linux & Best Practices for Getting Started
At the other end of the spectrum, Linux's default ext4 filesystem has incredible theoretical size limits (that may in practice be limited by associated tools like e4defrag). Furthermore, more mature alternative filesystems like xfs and jfs offer something for the most storage-hungry enterprise/datacom devices. Linux has supported processors with 64-bit memory addresses since around 1995, when Linus Torvalds completed the DEC Alpha port. Alpha was the first non-x86 port completed in-tree (a PowerPC tree also saw feverish development at the time). These early porting efforts laid the foundations for Linux's subsequent explosion in devices, which tend to use non-x86 architectures due to their lower electrical power requirements. On processors with 64-bit memory addressing, Linux supports a nearly unimaginable RAM size. On 32-bit processors, the kernel is usually built nowadays with support for physical address extension (PAE). PAE supports physical memory sizes up to 64GB, rather than the 4GB theoretical max that 32-bit processors would otherwise top out at. Linux may not be certifiable for the very highest security levels (EAL 5 and greater). This is because the kernel itself is large, even in its most stripped-down form. That, in turn, makes truly exhaustive code line audits simply impractical. This limitation prevents adoption in esoteric markets, like top secret military devices, or medical devices deployed inside the human body. That said, Linux has an excellent reputation for security and reliability, and sees widespread use in military applications, such as drone fighters and other autonomous vehicles. It is also widely used in medical gear that collects and processes data from hardware sensors implanted inside patients. Linux has even seen space duty, in several British and U.S. (NASA) rocket launches!
A standalone kernel.org Linux kernel (without any real-time add-ons) may not be practical for the most stringent hard real-time requirements, such as when you need interrupt latency guarantees lower than 2-10ms (depending on your hardware). The threshold is always trending downward, however, as kernel.org continues to assimilate more and more code from out-of-tree real-time projects. Furthermore, where true hard real-time really is needed, a multitude of free and commercial projects exist to pick up the slack (so to speak).
The 1980s saw the arrival of the first general purpose commercial operating systems designed especially for embedded systems. Sometimes, these are called off-the-shelf RTOSes. Hunter & Ready came out with their Versatile Real-Time Executive (VRTX) in 1980, pioneering a new market segment, according to most computing historians. Today, VRTX is owned by Mentor Graphics. It supports ARM, MIPS, PowerPC, and other RISC architectures, with or without memory management hardware. VRTX still has some prominent design wins -- the Hubble Space Telescope, as well as the software-controlled radio layer in Motorola mobile phones. A raft of other general purpose RTOSes followed. Several achieved greater commercial success than VRTX. By the mid-1980s, Wind River's VxWorks had emerged as the market leader. Others found their specialized niches, like Green Hills' Integrity for super high-security applications, or QNX Neutrino, for running your special real-time program alongside more or less normal Unix apps.
MontaVista, incidentally, was co-founded by Jim Ready. The same Jim Ready who co-founded Hunter & Ready, creator of the first off-the-shelf RTOS. So right from the start, embedded Linux had some pretty smart money behind it. Today, Lineo survives in Japan, and serves the consumer electronics industry. MontaVista, too, remains alive, albeit as a wholly owned but reportedly independent subsidiary of chip and network appliance specialist Cavium Networks. Cavium claims to have been founded by some of the talent liberated when DEC disbanded its Alpha chip team. Cavium also employs substantial engineering resources in India. Most of its chips use multi-way (typically 4- to 16-way) MIPS cores, together with a lot of networking-specific coprocessors. Most of the microprocessors the company produces seem to go into its own line of low-cost, extremely power-efficient networking gear, most of which targets the small to mid-size enterprise market. During the early Otts, a raft of other companies began to jump on the embedded Linux bandwagon. By 2001, there were half a dozen embedded Linux tool and OS startups. By 2003, Linux was even being adopted by traditional RTOS vendors. Sure, Linux itself was free, but commercial vendors could offer value-adds like tools integration, support guarantees, license indemnity, and so forth. The largest RTOS vendor, Wind River, began dabbling in the Linux development tools market as early as 2003. When it added Linux versions of its flagship RTOS stack products around 2004, no one could deny any longer that Linux truly had arrived as a viable, commercially supported embedded OS option.
Many Eyeballs
The first embedded Linux vendors based their products on Linux distributions like Red Hat and SuSE that were already in widespread use. The testing and integration these distributions got in the server and workstation markets made for an easier-to-use, more stable product, the thinking went. This strategy aimed to leverage Linus's Law, the theory proposed and named by Eric S. Raymond. It states that given enough eyeballs, all bugs are shallow. In other words, the more use software gets, the more likely that bugs will be found and fixed. And the fewer bugs software has, the more people are likely to use it. This virtuous cycle is sometimes likened to the network effect, whereby the value of something like a telephone network increases as more and more people join it. At the time, though, applying widely used, general purpose software in devices clashed wildly with accepted embedded practices. Many device engineers wrote Linux off, seeing it as too large, too unpredictable (it lacked hard real-time determinism), and too hard to test using automated testing routines. Devices must have toaster-like reliability, the thinking went, and the last point, in particular, raised concerns. Linux had two things going for it, though.
Linux is MMU-tiful
Traditional embedded RTOSes of the 1990s primarily used a flat memory model. The engineer would allocate a chunk of memory, and their application was expected to stay in bounds, with little or no supervision from the OS or from the underlying hardware. When code failed to behave as expected, it might overwrite memory belonging to another application, or even to the RTOS itself. Thus, a crashing application in such a system could easily cause the whole system to lock up.
Such bugs were notoriously difficult to find. If they emerged after a product shipped, an expensive recall could result. Thus, most devices of the day underwent exhaustive automated testing that exercised each routine throughout its complete range of possible variable values. Well-tested embedded systems with flat memory models were often elegant, minimalistic, efficient, and very stable. However, the cost was high, due to the involvement of specialized personnel (QA engineers) and lengthy testing processes that posed an inconvenient barrier to market entry. Linux, meanwhile, was a virtual memory OS that, thanks to its desktop origins, was ready to leverage hardware memory management units (MMUs). At that time, the early Otts, MMUs were increasingly common, even on very low-cost 32-bit embedded processors. As a virtual memory OS, Linux and the applications that run under it do not address actual physical memory addresses. Instead, each process receives a separate virtual memory address space, with MMU hardware enforcing the separations. This makes it impossible for one application to write to or erase memory not previously allocated to it. The practical result is that a virtual memory operating system (sometimes called an operating system with memory protection) is much better insulated from application crashes. The main advantage to using a VM OS like Linux in an embedded system is that the burden of testing may be reduced considerably. Today, the embedded Linux systems that power digital cameras, printers, or television sets may routinely endure application faults, without the user ever knowing. Linux keeps running, and simply reboots the program guide or menu or player application, with the user noticing little more than a slight pause. It is easy to see the appeal of virtual memory, compared to the rigorous testing approaches of earlier device OSes. In the early days of embedded Linux, Linux's memory management model was its most-talked-about technical feature.
The downside is that virtual memory OSes require considerably more storage and memory. That could be a deal-breaker in cost-sensitive, high volume markets. So, pretty early on, a version of Linux for MMU-less processors appeared. Called uClinux ("You See Linux" or, rarely, "Microcontroller Linux"), its development was driven largely by an Australian company selling low-cost enterprise networking gear. Today, uClinux remains widely used on deeply embedded systems. Best Practice: Unless you have significant time, experience, and staff to rigorously test your application code for high reliability and security in all foreseeable use cases, consider employing a processor with an MMU, and a version of Linux (i.e., not uClinux) that supports memory management. The other big, often discussed Linux feature, early on, was the possibility it gave companies to achieve better vertical integration of their product development process. What does that mean? The best way to explain it may be to look at the relative weakness of traditional RTOSes in this regard.
As noted, another way to describe this is that using Linux allows companies to increase their vertical integration, the total amount of the product stack that they control. Note: A couple of Linux vendors have experimented with royalty-bearing price models through the years, particularly in high-end markets, like high-availability telecommunications. However, the author is not aware of this approach ever being particularly successful. The second Achilles heel that product companies see when they look at an RTOS vendor is this: there is this company, in your product, and you have to rely on them for everything. Drivers, tools, library extensions... "All your base are belong to us," as geeks of the late twentieth century liked to say, for reasons forgotten in the mists of time. Something about a poor translation of a pirated game. Unless it signs all the NDAs, and pays another big chunk for a source code license, a product company may find itself wholly dependent on an RTOS provider for just about everything. Please take a number... so much for time-to-market. Meanwhile, with Linux, you have the source, at no extra charge. The world's best documentation can't compare with being able to read, debug, and tweak the code yourself. Best Practice: Try to take ownership of as much of your firmware stack as possible. Such vertical integration makes you less prone to costs and delays introduced by additional supplier partnerships. And that brings us to the last big puzzle piece. We've looked at Linux's technical edge as a VM OS, and how it let product companies gain better vertical integration. There was one other little thing that, though discussed last, was definitely not least.
OS. More drivers meant more consumer choice that lowered your parts bill-of-materials, too. And then, as Linux's popularity grew, even more developers became Linux contributors. Huge corporate benefactors like IBM, Intel, and HP notwithstanding, Linux's popularity in devices, combined with its license, arguably created network effects leading to a truly massive number of contributors. And it would appear in the rear-view mirror of history that Linus Torvalds, descended from Finland's Swedish-speaking administrator class, was just the right bureaucrat, at just the right time, to benevolently govern all of that wild, creative computer programming energy. From the patches that made Linux portable, to Linux's rudimentary embedded filesystems (squashfs and cramfs), to the git tools for distributed (key word, there) source code management, Linus and friends always seem to come up with just what it takes to preserve and build on Linux's momentum. Best Practice: When possible, choose components that are likely to benefit from the collaborative evolution resulting from open source licenses with stronger copyleft obligations (more onus on users to share their work back into the community).
be a few Blackberry holdouts remaining by the time this paper comes out. Best Practice: Use an open source OS such as Linux if your consumer will never care or need to know about the OS brand. Also consider using Linux when your OS will have a brand, but you wish to retain full control over all brand messaging, like the words on the screen, packaging, and advertisements. This is what Google did with its Android OS for mobile phones, for example. On the other hand, if you prefer to leverage the considerable marketing investments of third party OS providers like Microsoft, Wind River, Green Hills, QNX, and so on... then by all means, evaluate those products. They probably have not stayed in business as long as they have, in most cases, by selling stuff that doesn't work.
More recent open source projects like the excellent ConnMan are bringing Linux's network prowess further up the stack, too. Even the Network Mangler, er, Manager, is incredibly good nowadays. Believe it or not, with many RTOSes, networking was once an add-on you paid extra for. With Linux, it is built right in, fully configurable, as simple or cutting-edge as you wish. Linux's networking prowess played well in the early Otts, when suddenly, consumers expected to connect every device, be it a camera, printer, iPod, burglar alarm, you name it. Furthermore, consumers wanted to remotely configure everything via a browser-based interface. They didn't want to have to install something on their computer just to control a device. Ironically, it was really Microsoft that was responsible for this. Experience with Windows left most people with the impression that the more software you install, the slower your computer gets. This may be true only of OSes with application registry databases. But nevertheless, in the early Otts, client-server was out, and web apps were in, in, in. Linux benefits from at least ten different production-quality, embeddable open source webservers, many with CGI, lightweight scripting engines like Lua, and even AJAX support. If you need SSL, you can use full OpenSSL, or Dropbear, or several other even smaller, configurable encryption libraries. Linux was just made for doing this kind of thing. As a historical footnote, I seem to recall today's bling-rich AJAX web techniques sprouting from some pretty humble origins: remote sensing geeks looking to save solar power. With AJAX, rather than just responding to client requests, the server itself could initiate communication. So, it could just sleep until something urgent awakened it, rather than continually having to power up just to respond to remote polling. Awakening and going to sleep again represent the vast majority of the power budget in many systems of this type, so it was a pretty big advance.
But AJAX was great for other low-powered devices, too, because the more the processing burden could be shifted from server to client, via JavaScript, the more responsive the user interface could be made to feel. Best Practice: If your device is connected (what device isn't, these days?), consider an OS such as Linux that will offer you a lot of options, both for configuration and driver support. Nearly universal hardware support may increase your purchasing power, letting you source the cheapest parts for each production run. #2 - LINUX HAS MULTIPLE DISPLAY OPTIONS With Linux, when it comes to the display, you have options. For the tiniest system, you might just put the console on a serial port. For deeply embedded systems, there's a kernel framebuffer that abstracts the video hardware... letting you write even very minimalist GUI layers without marrying any one specific hardware provider. There's even hardware acceleration, via DirectFB. Moving up in functionality, there's SVGAlib, which has at times been popular with gaming software authors. Simple DirectMedia Layer (SDL) is another step up that has been trending upward in recent years. Finally, if you need the utmost in graphics performance, you can just use Xorg; that opens the door to doing everything on a device that you can do on a PC. This kind of flexibility helps Linux suit a variety of product types, and is especially useful where a line of products must span a range of price points and functionalities. Best Practice: Along with the processor, the display is often the most expensive component in a device. Consider an OS such as Linux that will give you some options, including choosing the lowest-cost parts, and in some cases addressing a wide range of price points with a great deal of shared code.
A Divergence of Purpose
So product companies want one thing -- stability and long product life-cycles -- and open source developers (including Linux developers) want rapid progress. When these goals collide, embedded product companies may feel like they've hitched their wagon to a rocket-ship. Sure, you get to take advantage of all the new features and performance, and especially the new hardware support, that arrive with each (micro-)evolutionary step. But instead of just getting a new release from your RTOS vendor every other year, with Linux, there is a steady flow of releases you have to decide how to manage. Due to the desire of developers to make Linux the best of its kind, the application programming interface (API) undergoes perpetual, shall we say, evolution. It's hard to think of a single Linux subsystem -- be it sound or video or wireless networking or USB or peripheral interface detection and naming or scheduling or... you get the idea -- that has not gone through at least one or two major rewrites in the last decade, let alone over the whole course of Linux's 20-plus years. Every time Linux's APIs change, you have to change your code, too. So, maintaining your applications on Linux may require more work than on less dynamic OSes. It's a lot of work to keep
up, yet doing so is the best way to ensure good security, and to be able to get support from kernel.org. Linux hackers may not be interested in fixing a bug in some very old version that few people use any longer. But, if you find a fault or regression in the current release, they will trip over themselves to solve the problem! Best Practice: Using the newest releases of any open source software gives you the best chance of getting help from the people who know the code best: the authors themselves! And, since most people try to keep fairly current, you may also get more help from those who, like you, are implementing it.
Managing Up
Many open source adopters find that pure, pristine upstream code comes very close to fitting their needs, yet may fall short in a few areas. It is no different with the Linux kernel. Luckily, the source is available, so with a few hacks here and there, your product requirements may all be met. The downside is that such hacks will have to be ported forward for the foreseeable future. There is another possibility. If you are willing and able to work with upstream developers, you might be able to get your hacks merged. If so, they will likely be maintained by others. If you're talking about a simple bug fix, it's definitely worth a try. Just submit it upstream. If it's a new feature, the sell may be tougher. However, in recent years, the Linux kernel team has been pretty gung-ho about accepting patches from embedded developers. People tend to blame or credit Andrew Morton. Around 2007, the noted Linux maintainer began stumping for increased embedded participation. See http://goo.gl/Yo7OI and http://goo.gl/OU9gT. Morton's talks should be required reading for those considering submitting patches or features to kernel.org. He's full of great suggestions, like (these are paraphrased):
- It's okay to use commercial Linux in consumer products, but prefer generic kernel.org where the customer may reasonably be expected to upgrade the OS themselves
- If your team and/or company is going to work with kernel.org, designate a single point of contact
- Working with kernel.org will make your code better, because it'll be massively peer-reviewed and massively tested by kernel.org developers
- For the best chance of being taken seriously, be sure to follow the nit-picky coding style guide delineated in the kernel sources, or here: http://www.kernel.org/doc/Documentation/CodingStyle . If you don't like your tabs eight spaces wide (maybe you're on a laptop), try setting your editor's tab width to something else, and then use indent to reformat prior to submitting.
Also, before joining the lkml mailing list, don your asbestos undies, as the culture favors merit over civility, for sure. In fact... Best Practice: Managing your non-differentiating patches upstream into the Linux kernel itself is often a great idea. However, it is not something to be taken lightly. Go in prepared for what's ahead. A good place to start might be Jon Corbet's paper. And come to think of it, now would be a pretty good time to drop the big one:
If you write something, you are probably going to have to maintain it. Whether it be a simple patch or a complete application, you're taking on an ongoing responsibility, like a monthly recurring cable TV charge. Each time you wish to upgrade the open source parts of your stack, you will have to first port your patches forward to match the new release. That can involve a lot of relatively uninspiring work. Much better to stick to non-recurring charges, if you can. Use your time to incorporate pristine upstream code that is likely to be maintained by someone else. And that, arguably, was the great lesson commercial Linux vendors taught everyone in the mid-Otts: Stop carrying all those patches!
Binary Blues
One thing that is unusual about Linux, among operating systems, is that it comes with absolutely no guarantee of backward binary application interface (ABI) compatibility. That means you cannot count on binaries built for one kernel working on the next, no matter how minor the patch level. They might run fine. Or they might not. This results from a design philosophy aimed at letting Linux evolve quickly, rather than being tied to possibly inefficient methods from the past. Thus, in terms of the open source programmers' goal of building the best thing of its kind, the lack of backwards ABI compatibility truly frees Linux developers to innovate. Meanwhile, many chip companies do not publish their driver source code, due to legal fears. Instead, they issue binary-only drivers from time to time, typically built against the kernel versions adopted by the largest Linux distributors, e.g., the commercial embedded Linux distributions and the big desktop Linux distros such as Red Hat and Ubuntu. So, the problem is clear. You may wish to upgrade to a newer Linux kernel, for the reasons cited above. However, you may have some drivers that are tied to an older kernel build. Your options, then, are limited. You can try to back-port the features you need from the newer kernel. Or, you can approach your hardware supplier(s) and/or your suppliers' other customers in hopes of getting your hands on a driver built for a newer kernel release. This situation is a real set-back, reminiscent of the days when device builders relied on an RTOS supplier for drivers. Yet, it is the reality on the ground for many device engineers. If you have multiple binary drivers, the situation is compounded. A newer build of one may be available, but unless you can get the others upgraded as well, you may still be stuck with the old release.
Or, perhaps you will have to spend some time reverse-engineering the binary driver, and writing a wrapper for it, so it can use the newer ABI. Best Practice: When possible, try to minimize the number of binary drivers in your stack. When not possible, try to engage with the community to see how others handle the lack of current drivers.
compiler turns human-readable code into machine-readable binary (1s and 0s) object code. Building even a simple application may produce quite a few binary objects, because C code typically includes external resources, like the C libraries. That's where the linker comes in. The linker's job is to put objects together so they can find and work with one another. The linker can statically link everything into a single, big, self-contained binary object. Or, it can create an object capable of dynamically linking pre-built objects elsewhere on the filesystem. The location of such objects is typically specified at configuration time. On Windows, pre-built objects are known as DLLs, or dynamic-link libraries. On Linux, they are called shared objects. The good ol' GNU C libraries supply a good many of the shared object files on a typical Linux box: 291, it appears, on my fairly recent Ubuntu box. Best Practice: Dynamic linking works best when the same toolchain that is building your app was also used to build any shared objects linked by your app. Since the GNU C libraries comprise the most commonly linked objects on most Linux systems, it probably makes good sense to upgrade your toolchain and C libraries together.
Buildroot sometimes wins praise for its forbearance from feature-bloat. It gets a lot done, but aims to do so by leveraging pure, upstream versions of other open source projects such as Kconfig. This epitomizes one of the core early design philosophies behind Unix: orthogonality. Dennis Ritchie (may he RIP) believed tools should be orthogonal, like the separate 2D views in a mechanical drawing. Each perfects me first (oh sorry, that's Dr. Bronner again. How does he keep getting in here?). Each does one thing, and does it well. Yet, each combines easily with other tools, for example, through Unix pipelines. At the risk of stretching the mechanical drawing example too far, any three adjacent and equally spaced 2D views in an orthogonal drawing can be extended, at 0, 45, and 90 degrees respectively, to create an accurate 3D projection.
OpenEmbedded in March 2011. Compared to OE, Yocto aims a little higher in the stack. Besides just the tools, the project is taking ownership of quite a few actual bitbake recipes, for many common architectures. Besides making OE easier to use, this could help reduce fragmentation, Yocto proponents say. Best Practice: If your company creates many products with shared hardware, interfaces, or software features, standardizing on an in-house distribution makes sense. Specialized tools like OE/Yocto stand to offer considerable convenience, as well as conformance with industry norms in library and tool selections. There is always risk in basing an in-house distribution on an external project like Yocto. However, Yocto appears to enjoy substantial backing.
Filesystems
The Linux kernel supports many filesystems, including some intended specially for use on the flash memory typically used in embedded devices. Flash was named for the way camera flash bulbs work: when it's time to erase flash -- to turn the little 0s back into 1s -- a great deal of electrical energy must be stored up and then released in a burst, in order to overcome static pressure. As you might expect, this technique results in the eradication of a fairly large number of 0s; in fact, a full block's worth. Modern NAND flash device blocks might be 256K or even larger. That means in order to change one character in a log file, you have to first copy out 256K (with the changed character), and then blast away the stale data. All this bursting and blasting takes its toll, and a given erase block can endure being flashed only so many times. To prevent flash from wearing out quickly, many flash filesystems attempt to wear-level; that is, to spread erasures evenly among all the blocks. The traditional figure cited was 100,000 erase cycles. That seems to have increased by an order of magnitude or more, at least in marketing literature. Flash can be loosely divided between NOR and NAND approaches, named for the respective bitwise techniques used to read data. NOR is the old, expensive, relatively reliable kind used mainly in industrial applications. NAND is cheaper, denser, and typically found in consumer devices. Traditional embedded systems based on Linux typically used a small amount of expensive NOR flash to store critical things, like the OS. Where larger capacities were needed, some cheaper NAND would also be present. Both types would typically connect as raw memory devices; that is, any wear-leveling would have to be handled by the Linux kernel and associated filesystems. A few of Linux's earliest flash filesystems, CramFS and SquashFS, do not bother with wear-leveling.
They do not really have to, since they are essentially read-only. CramFS, written by Linus during his days working for chipmaker Transmeta, and the similar SquashFS essentially just shake out files at boot time and fluff them up onto a RAMdisk, the difference being in how tightly the files are serialized when stored. An aside: Raw flash devices are not to be confused with flash memory gadgets like USB keys, SD cards, CompactFlash, and the like. Such devices have an on-board memory controller, typically running on an ARM7 core, that handles wear-leveling behind the scenes. Such devices attach as normal block devices, and act as hard drives (albeit slow ones). This approach was invented by M-Systems, which marketed the first such devices as disk-on-chips. The Linux kernel supports several flash filesystems that do handle wear-leveling for flash devices connected via Linux's MTD (Memory Technology Device) interface. They can be found under Miscellaneous Filesystems in menuconfig. Through the years, each has waxed and waned in popularity: JFFS2, LogFS, and UBIFS are those present in the current kernel. JFFS2 has long been popular for fairly small flash partitions. LogFS is said to suit really large flash devices; however, it is sometimes described as a journal without a filesystem, so depending on your application, it could use more processing power. Anecdotally, UBIFS seems to be chalking up many design wins. Another mature option is YAFFS2, which is maintained outside the kernel tree, and also sees use with other RTOSes. It has been around for a long time, and has seen many design wins. Best Practice: Many device developers benchmark available flash filesystems with their specific application. And, considerable work often goes into tuning filesystem parameters, in order to achieve the greatest performance and reliability. Increasingly, high-end devices like smartphones are using a relatively new storage technology known as eMMC.
Along with copious volumes of high-density NAND storage, eMMC (embedded MultiMediaCard) devices employ their own on-board memory controller. That controller lets the eMMC connect to Linux as a block device, similar to a USB key, SD card, or hard drive, and it implements the particular flash memory vendor's preferred, proprietary wear-leveling techniques. Typically, you would format your eMMC device with a normal block filesystem, like ext4. On the one hand, as a developer, this is great, because the complexities of flash wear-leveling are wholly abstracted and cloaked by the eMMC. On the other hand, tuning your system for optimal stability and performance may involve trial and error, as your storage device is literally and figuratively a black box. Yet, eMMCs seem to be where portions of the device industry (smartphones, for example) are heading.
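The benchmark-with-your-own-workload advice above can be sketched simply. The snippet below times small synchronous appends, a pattern that tends to stress flash storage; the `TARGET` path is an assumption (it defaults to a temp directory so the sketch runs anywhere, but in practice you would point it at a mount of the filesystem under test):

```python
# Minimal sketch of benchmarking a candidate flash filesystem with a
# specific workload: many small appends, each forced to the medium.

import os
import tempfile
import time

def bench_small_appends(directory, writes=200, chunk=512):
    """Return seconds taken to append `writes` chunks, fsync'ing each one."""
    path = os.path.join(directory, "bench.dat")
    payload = b"x" * chunk
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(writes):
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())   # push it past the page cache to the device
    return time.perf_counter() - start

TARGET = tempfile.mkdtemp()  # swap in e.g. a JFFS2, UBIFS, or ext4 mount point
elapsed = bench_small_appends(TARGET)
print(f"{200 * 512 / elapsed / 1024:.0f} KiB/s")
```

Running the same harness against each candidate filesystem, and again after each tuning change, gives numbers that reflect your application rather than a generic benchmark.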
the zillions of ARM-based SoCs out there. Many in the embedded Linux world no doubt wish this company well! Once you decide on your source, you still have to pick packages. You may also at times have to bring in extra-curricular packages ("Universe" items, in the language of Ubuntu). Every time you bring something into your stack, it may be worth considering its long-term viability, its roadmap, and ultimately, its alignment (or not) with the goals of your project.
License Tracking
Quite a bit of open source software is available under multiple licenses. BSD, MIT, GPL: take your pick. That can simplify things a great deal. However, it is not enough to just look at the root-level project license and assume it applies to all the code in the tree beneath it. Where things really get complex is when open source projects themselves draw from multiple other open source projects. As a result, the code in various subdirectories typically will contain additional license stipulations. Just collecting all the licenses in one place is a serious challenge! And that challenge, it turns out, is one where developers can make a huge difference. More than anyone else, developers are in the source code. Their job should not be to try to interpret the law. But increasingly, developers using open source may be tasked with creating a license manifest: basically, a list of all the licenses they run across. If you're lucky, your company may have a legal team to help evaluate licenses. Often as not, though,
at least some part of open source license evaluation falls to developers, according to surveys at LinuxDevices.com as recently as 2007. The challenge posed by open source license management led to the launch of several startups, including Black Duck and Palamida. Subsequently, open source packages like HP's well-publicized FOSSology came along. Of course, each took slightly different technical approaches, and each devised slightly different document formats. One ray of hope, albeit still a very young one, is the new SPDX standard. Among other promises, it could provide some standard formats and approaches for open source license management.
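The first step of the license-manifest chore described above, just gathering the licenses in one place, can be sketched as a tree walk. The file-name patterns below are a heuristic assumption, not a legal tool, and real scanners like FOSSology go much further by inspecting file contents:

```python
# Sketch of the license-manifest chore: walk a source tree and list every
# license-looking file. File-name matching is a heuristic assumption only.

import os
import tempfile

LICENSE_NAMES = {"license", "licence", "copying", "copyright", "notice"}

def collect_licenses(root):
    """Return relative paths of probable license files under `root`."""
    found = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            stem = name.lower().split(".")[0]   # COPYING.GPLv2 -> "copying"
            if stem in LICENSE_NAMES:
                found.append(os.path.relpath(os.path.join(dirpath, name), root))
    return sorted(found)

# Demo on a throwaway tree with nested, separately licensed subprojects.
root = tempfile.mkdtemp()
for rel in ("LICENSE", "third_party/zlib/LICENSE", "vendor/COPYING.GPLv2"):
    path = os.path.join(root, rel)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write("license text here")

print(collect_licenses(root))
# e.g. ['LICENSE', 'third_party/zlib/LICENSE', 'vendor/COPYING.GPLv2']
```

The point of the sketch is the subdirectory walk: the root-level LICENSE is only one of three hits, which is exactly why a root-only check is not enough.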
Conclusion
So, there you have it. Linux achieved world domination in devices thanks to technical strengths, good management, a great license, portability, source availability, royalty-free licensing, product and service support from commercial partners, and a host of other factors. While embedding Linux is not without challenges, at this very instant, there are probably more running instances of Linux than of any other operating system. And with the rate of development that Linux continues to enjoy, it's only going to get better.
Distribution-Flexible
The Linux Foundation's courses are built to be distribution-flexible, allowing companies or students to easily use any of the big three distribution families: Red Hat/Fedora, Ubuntu/Debian, SUSE/openSUSE. If your company runs one of these Linux distributions and needs an instructor who can speak deeply on it, we have a Linux expert who knows your distribution well and is comfortable using it as the basis for any corporate Linux training. For our open enrollment students who take our online or classroom training, our goal is to help them, first and foremost, to become Linux professionals, rather than focusing on how to use one particular set of tools.
Technically-Advanced
The Linux Foundation's training program has a clear advantage. As the organization that employs Linux founder Linus Torvalds, we are fortunate in our ability to leverage close relationships with many of the top members of the Linux community, including Linux kernel maintainers. This has led to the most comprehensive Linux training on the market, delivered through rigorous five-day courses taught by Linux experts who bring their real-world experiences to every class. Since Linux is always evolving, our course materials are regularly refreshed and kept up-to-date with stable versions of the Linux kernel. We deliver our advanced Linux training in a 50/50 format, where 50 percent of a student's time is spent learning from an instructor and the other 50 percent doing exercises in hands-on learning labs. For more information about our Linux training, please visit training.linuxfoundation.org and contact us today.
The Linux Foundation promotes, protects and standardizes Linux by providing unified resources and services needed for open source to successfully compete with closed platforms. To learn more about our Linux Training program, please visit us at training.linuxfoundation.org.