


Design of a Lightweight TCP/IP Protocol Stack
with an Event-Driven Scheduler*

Joonhyouk Jang, Jinman Jung and Yookun Cho
School of Computer Science and Engineering
Seoul National University
Seoul, 151-744 Korea

Sanghoon Choi
School of Computing
Soongsil University
Seoul, 156-743 Korea

Sung Y. Shin
Electrical Engineering and Computer Science Department
South Dakota State University
Brookings, SD 57007 USA

The traditional TCP/IP protocol stack suffers from shortcomings related to context-
switching overhead and redundant data copying. The software-based TOE (TCP/IP
Offload Engine), also known as lightweight TCP/IP, was developed to optimize the
TCP/IP protocol stack to run on an embedded system. In this paper, we propose the
design of a lightweight TCP/IP protocol stack that runs on an event-driven scheduler. An
event-driven scheduler is one of the main components of a real-time operating system
that provides essential functionalities for an embedded system in network communica-
tion. We discuss the problems involved in designing a lightweight TCP/IP with an event-
driven scheduler, especially for the issues of TCP transmission and TCP retransmission.
We implemented and evaluated the proposed TCP/IP stack on an embedded networking
device and verified that the proposed TCP/IP stack is well suited for high-performance
networking in embedded systems.

Keywords: TCP/IP, TCP/IP offload engine, embedded system


With the rapid growth of wired/wireless networks, embedded systems are required
to have a high capability for network communication. Several studies have been done to
accelerate packet processing in embedded devices dedicated to network communication.
Considering the communication environments of embedded devices, these studies focus
on enhancing the TCP protocol [1] and optimizing the TCP/IP implementation [2].
TOE (TCP/IP Offload Engine) [3, 4] refers to an implementation of a protocol
stack that is optimized for embedded devices. Although general TOEs are implemented
in hardware [2, 5], in some cases TOEs are implemented in software [6, 7]. Software-based
TOEs offer a high degree of flexibility and a low cost compared to hardware-based
TOEs. A protocol stack implemented in a TOE is designed to support more efficient
packet processing by overcoming the shortcomings of the traditional TCP/IP protocol stack.
Received May 31, 2011; accepted March 31, 2012.
Communicated by Junyoung Heo and Tei-Wei Kuo.
This research was supported by the Basic Science Research Program through the National Research Foundation
of Korea (NRF) funded by the Ministry of Education, Science and Technology (2011-0027454), and by the
Ministry of Culture, Sports and Tourism (MCST) and the Korea Copyright Commission in 2011.

This paper proposes the design of a lightweight TCP/IP protocol stack, which runs on an event-driven scheduler in a real-time operating system. We also define the problems involved in TCP transmission and retransmission on an event-driven scheduler.

The rest of this paper is organized as follows. In section 2, we introduce previous works on software-based TOEs, or the lightweight TCP/IP protocol. In section 3, we define the execution environments associated with the problem. In section 4, we discuss the problems and their solutions as regards TCP transmission and retransmission in the execution environments described in section 3. In section 5, we evaluate our implementation, and in section 6, we conclude this paper.

2. SOFTWARE-BASED TCP/IP OFFLOAD ENGINE

The traditional TCP/IP protocol is designed as a layered architecture. Each of the software layers handles a data unit of its own form as used in the specific protocol and manages the data structure to maintain the states of the data unit. For example, a TCP connection processes a bit stream, which is divided into segments, while the IP protocol processes the datagram. There are many more software layers, and they all maintain a specific form of data structure, consuming memory space and processing time while dealing with data packets. When transmitting data packets, each layer copies data from the packet buffer in the upper layer to its own packet buffer and transfers the data to the lower layer after processing the data packets. This occurs in reverse order when receiving data packets. In particular, data copying between the layers is the most time-consuming process in the network protocol stack. In addition, as the layers or the network protocols are executed in separate processes, the overhead of context-switching slows down the packet processing. This design concept is not suitable for running on an embedded device because the resource limitations of an embedded device (the size of the binary executable and the memory space) are not considered.

Software-based TOEs, sometimes referred to as lightweight TCP/IPs, run on real-time operating systems. Because real-time operating systems do not support the full functionality of general operating systems, the software-based TOE architecture is customized to the real-time operating system in which it runs. The most remarkable feature of a software-based TOE, aka lightweight TCP/IP, is the elimination of the overhead of managing complex data structures and copying redundant data packets between software layers in the traditional design of the network protocol stack [8-11]. Zero-copy packet transmission and reception can be achieved by integrating applications and network drivers together into a unified buffer management scheme, which provides the layers with unified methods of accessing a shared buffer space to allocate and de-allocate packet buffers. In this way, data structures can be simplified and the number of redundant, time-consuming data copying events is reduced to zero. In an actual implementation, one or two instances of copying can be allowed to preserve the independence of the applications and network drivers.

Another feature of lightweight TCP/IP is its software architecture, which is designed to integrate software layers. Because the layers are not separated strictly, they can stick together or flexibly take charge of the functionalities of the network protocols. With this feature, the program binaries occupy less storage space and the communication between the layers can be simplified.
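The unified buffer scheme described above can be sketched in C. This is a minimal illustration, not the paper's code: all names (`packet_t`, `pkt_alloc`, `tcp_send`, the pool sizes) are hypothetical, and a real stack would add reference counting and concurrency control. The point is that every layer works on the same shared buffer, so only the single application-to-buffer copy remains.

```c
/* Hypothetical sketch of a unified packet-buffer scheme: all layers
 * allocate from one shared pool and pass packets by reference, so no
 * layer-to-layer copy is needed. Names are illustrative only. */
#include <stddef.h>
#include <string.h>

#define POOL_SIZE 8
#define PKT_BYTES 1536      /* one Ethernet frame plus headroom */

typedef struct {
    unsigned char data[PKT_BYTES];
    size_t len;
    int in_use;             /* 1 while any layer still references the packet */
} packet_t;

static packet_t pool[POOL_SIZE];

/* Allocate one packet buffer from the shared pool; NULL when exhausted. */
packet_t *pkt_alloc(void)
{
    for (int i = 0; i < POOL_SIZE; i++) {
        if (!pool[i].in_use) {
            pool[i].in_use = 1;
            pool[i].len = 0;
            return &pool[i];
        }
    }
    return NULL;
}

/* Return a buffer to the pool once the last layer is done with it. */
void pkt_free(packet_t *p) { p->in_use = 0; }

/* Lower layers receive the same buffer; nothing is copied between them. */
int eth_send(packet_t *p) { return (int)p->len; }  /* stand-in for the driver */
int ip_send(packet_t *p)  { return eth_send(p); }

int tcp_send(packet_t *p, const void *payload, size_t n)
{
    if (n > PKT_BYTES) return -1;
    memcpy(p->data, payload, n);   /* the single copy: application -> buffer */
    p->len = n;
    return ip_send(p);             /* TCP, IP, Ethernet all share this buffer */
}
```

In this sketch the one allowed copy sits at the application boundary, matching the observation that one or two copies may be kept to preserve the independence of applications and drivers.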

Moreover, the software can be executed in only one or two processes, resulting in more efficient network communication than in the traditional design of the protocol stack.

Previous works in this research area introduced micro-IP [12], lwIP [13], tinyTCP [14], NexGenIP [15], and NETX [16]. In particular, lwIP, open source software developed by SICS, is one of the most well-known implementations of lightweight TCP/IP. It provides IP, ICMP, UDP, TCP, and DHCP and is designed to support various real-time operating systems. lwIP implements the pbuf structure, which dynamically allocates and de-allocates packet buffers to increase the packet processing performance. Several studies have been done to port lwIP onto real-time operating systems or to optimize a lightweight TCP/IP for an embedded platform [17-19].

3. OVERALL SOFTWARE ARCHITECTURE

3.1 Execution Environments

The embedded system described in this paper is assumed to be a device dedicated to network communication. The software embedded in the device consists of the main program, applications, a TCP/IP protocol stack, a network device driver, and a real-time operating system (Fig. 1). The operating system is simplified, including an event-driven scheduler and a timer.

Fig. 1. Software architecture of the system with a representation of network layers and transmission/reception flows.

The program is compiled into one binary file which is embedded in the system. This implies that the software components included in the binary file are executed in one control flow with a main function which calls them. Upon execution, the main program calls an initialization routine and the scheduler. The initialization routine registers the main functions of applications and the TCP/IP protocol stack, and the scheduler executes them. The applications and the TCP/IP protocol stack process their jobs by registering their functions as tasks with the scheduler. Putting a task into the wait queue in the scheduler is implemented as registering a function pointer in it. The scheduler executes tasks which are in the ready state.
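The execution model above ("a task is a registered function pointer") can be sketched as follows. This is an illustrative reconstruction under assumed names (`task_register`, `scheduler_run_once`); the paper does not publish its scheduler code.

```c
/* Illustrative sketch of an event-driven scheduler: registering a task
 * means putting a function pointer into the wait queue, and the main
 * loop runs each ready task to completion. Names are hypothetical. */
#include <stddef.h>

typedef void (*task_fn)(void *);
typedef struct { task_fn fn; void *arg; } task_t;

#define QUEUE_CAP 32
static task_t wait_queue[QUEUE_CAP];
static int head, tail, count;

/* "Putting a task into the wait queue" = registering a function pointer. */
int task_register(task_fn fn, void *arg)
{
    if (count == QUEUE_CAP) return -1;   /* queue full: task creation fails */
    wait_queue[tail].fn = fn;
    wait_queue[tail].arg = arg;
    tail = (tail + 1) % QUEUE_CAP;
    count++;
    return 0;
}

/* One scheduler pass: run every task that is currently ready, each to
 * completion (non-preemptive). A real main() would loop over this forever. */
void scheduler_run_once(void)
{
    int n = count;
    while (n-- > 0) {
        task_t t = wait_queue[head];
        head = (head + 1) % QUEUE_CAP;
        count--;
        t.fn(t.arg);     /* a task may register new tasks from here */
    }
}

/* Tiny demo task so the effect is observable. */
static int ticks;
void tick_task(void *arg) { (void)arg; ticks++; }
int get_ticks(void) { return ticks; }
```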

The event-driven scheduler uses a non-preemptive prioritized scheduling policy. It contains two wait queues, and one of two priorities is assigned to each. A task is assigned a priority at the time of creation and is put into the corresponding wait queue. Task creation is done in three ways: a task can be created by an application task, by a network protocol task, or by the timer in the operating system (Fig. 2). The timer provides four functions: it notifies the current time, creates tasks periodically, creates a task at a reserved time, and deletes a reserved task.

Fig. 2. Task creation and execution through the scheduler and timer.

3.2 TCP/IP Protocol Stack

We implemented TCP, UDP, IP, ICMP, ARP, and the Ethernet protocol in the TCP/IP protocol stack. These protocols are the minimum requirements necessary to communicate with other entities in an IP network [20]. Only the essential parts of the network protocols are implemented in the TCP/IP protocol stack; it does not provide the full functionality of the network protocols. We briefly describe the functional specification of the lightweight TCP/IP protocol below.

- Packet transmission and reception
- Multiple applications (ports)
- TCP window management for flow control
- TCP connection establishment, management, and destruction
- Checksum calculation
- No TCP options except the MSS (Maximum Segment Size) configuration
- Reordering of out-of-order segments in packet reception is not supported
- Congestion control is not supported
- Calculations such as the RTO (Retransmission Time Out) are replaced with constant values
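The two-priority, non-preemptive policy described in this section can be sketched as follows. All names (`task_create`, `schedule_next`) are hypothetical, not the paper's API; the sketch only shows the selection rule, i.e. the high-priority queue is always drained before the low-priority queue is consulted.

```c
/* Sketch of a two-queue, non-preemptive prioritized scheduler.
 * Illustrative only: the next task is always taken from the
 * high-priority queue first. */
typedef void (*task_fn)(void *);
typedef struct { task_fn fn; void *arg; } task_t;

#define CAP 16
static task_t queue_[2][CAP];    /* queue_[0]: high, queue_[1]: low */
static int len_[2];

int task_create(int prio, task_fn fn, void *arg)  /* prio: 0 high, 1 low */
{
    if (prio < 0 || prio > 1 || len_[prio] == CAP) return -1;
    queue_[prio][len_[prio]].fn = fn;
    queue_[prio][len_[prio]].arg = arg;
    len_[prio]++;
    return 0;
}

/* Run the next ready task to completion; returns -1 when both queues
 * are empty. Non-preemptive: nothing interrupts t.fn(). */
int schedule_next(void)
{
    for (int p = 0; p < 2; p++) {
        if (len_[p] > 0) {
            task_t t = queue_[p][0];
            for (int i = 1; i < len_[p]; i++) queue_[p][i - 1] = queue_[p][i];
            len_[p]--;
            t.fn(t.arg);
            return 0;
        }
    }
    return -1;
}

/* Demo task recording execution order so the policy is observable. */
static int order_[8];
static int executed_;
void mark_task(void *arg) { order_[executed_++] = *(int *)arg; }
int run_order(int i) { return order_[i]; }
```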

4. PACKET TRANSMISSION AND RECEPTION

4.1 Task Granularity

Tasks that are executed on an event-driven scheduler are classified as application tasks, packet transmission tasks, and packet reception tasks. For an event-driven scheduler, it is difficult to guarantee fairness in distributing the execution time in the manner that schedulers in general operating systems do. Hence, it is necessary to balance the execution time of application tasks, packet transmission tasks, and packet reception tasks. In particular, it is important to define the granularity of the packet transmission task, because this directly affects the implementation methods and the system performance. The granularity of the packet transmission tasks is the most important factor in balancing the tasks to improve the performance of the system.

Fig. 3 shows three levels of task granularity. At the first level of task granularity, a packet transmission task can transmit only one packet. Every time packet transmission is required, a packet transmission task is created and put into the wait queue in the scheduler. This is simple to implement, and the synchronization problem (discussed later) is not serious. Because the first-level granularity maximizes the responsiveness of the system, it is the most appropriate design for interactive communications. However, as the size of the data increases, the number of tasks waiting in the queue increases rapidly.

Fig. 3. Three levels of task granularity: (a) first-level granularity; (b) second-level granularity; (c) third-level granularity.
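To make the scaling difference concrete, the following back-of-envelope helpers count how many transmission tasks are created to send a given amount of data at the first and second granularity levels. This arithmetic is our illustration, not code or data from the paper.

```c
/* Illustrative task-count arithmetic for the granularity levels above. */

/* First level: one task per packet. */
unsigned tasks_first_level(unsigned total_packets)
{
    return total_packets;
}

/* Second level: one task per batch of `per_task` packets, rounding up,
 * so the task count grows `per_task` times more slowly. */
unsigned tasks_second_level(unsigned total_packets, unsigned per_task)
{
    return (total_packets + per_task - 1) / per_task;
}
```

With five packets per task (the batch size used later in the evaluation), sending 1,000 packets creates 1,000 first-level tasks but only 200 second-level tasks, which is why the wait queue fills up much more slowly.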

With the second-level granularity, a packet transmission task transmits multiple packets when executed. The number of tasks increases much more slowly than in a system with first-level granularity; by controlling the number of packets assigned to a single task, the system can transmit a large number of data packets without degrading the performance. Given that the event-driven scheduler is non-preemptive, however, the execution time of a packet transmission task increases linearly as the number of packets assigned to it increases. Packet reception tasks cannot be executed in a short time if a transmission task is running; thus, the response time of the system increases. A further disadvantage of second-level granularity is synchronization, as described in section 4.2.

With the third type of granularity, a task managing packet transmission runs continuously by creating a task that is identical to itself before it terminates. Such a packet transmission task runs in the way that a process, such as a daemon process, runs in a system with a general operating system. In this case, tasks are created when an application task requests transmission or when data packets remain in the packet buffer. With the first and the second types of granularity, if the size of the wait queues is not sufficiently large, task creation fails repeatedly and the failures greatly reduce the system performance. A system with third-level granularity can cope with this problem of task creation failure. However, if a packet transmission task is executed too frequently, it can break the balance with the other types of tasks.

4.2 Fast Retransmission and Synchronization

In TCP, retransmission is described as being triggered by a retransmission timeout (RTO). In reality, most retransmissions are triggered by the fast retransmission mechanism when three duplicated acknowledgements are received from a recipient [20]. To support fast retransmission, a TCP connection is required to have additional states. The state transition occurs when duplicated acknowledgements are received.

In the event-driven scheduler described above, it is challenging to execute packet transmission tasks and packet reception tasks at the exact time. During the time between the arrival of a packet and the execution of the packet reception task processing that packet, multiple packet transmission tasks can be created (Fig. 4). Though the effect of this problem is not great without packet losses, the system performance is greatly decreased when a packet is lost.

Fig. 4. Fast retransmission and synchronization.
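The third-level, daemon-like transmission task can be sketched as follows. Names and the burst size are hypothetical (the real stack drives the EMAC driver rather than a counter); the essential move is the last line of `tx_task`, where the task re-registers itself before terminating whenever packets remain, so exactly one transmission task exists at any time and task creation cannot fail from queue overflow.

```c
/* Sketch of a third-level "self-recreating" transmission task.
 * Illustrative names; not the paper's code. */
typedef void (*task_fn)(void);

#define CAP 8
static task_fn queue[CAP];
static int qlen;

static int pending_packets;   /* packets waiting in the send buffer */
static int sent_packets;
#define BURST 5               /* packets transmitted per activation */

int task_register(task_fn fn)
{
    if (qlen == CAP) return -1;
    queue[qlen++] = fn;
    return 0;
}

void tx_task(void)
{
    int n = pending_packets < BURST ? pending_packets : BURST;
    sent_packets += n;        /* stand-in for handing packets to the driver */
    pending_packets -= n;
    if (pending_packets > 0)
        (void)task_register(tx_task);  /* recreate itself before terminating */
}

/* Minimal scheduler loop: run tasks until none remain. */
void scheduler_loop(void)
{
    while (qlen > 0) {
        task_fn fn = queue[0];
        for (int i = 1; i < qlen; i++) queue[i - 1] = queue[i];
        qlen--;
        fn();
    }
}

void set_pending(int n) { pending_packets = n; }
int get_sent(void) { return sent_packets; }
```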

When a TCP connection decides to retransmit a data packet by fast retransmission, out-of-order packets are waiting to be sent in the wait queue. These tasks significantly disturb the synchronization and consequently degrade the system performance. Thus, a mechanism is required to cancel or delay the execution of out-of-order tasks.

There are two methods of task cancelation. The first method is to store the IDs of the packet transmission tasks in the connection state and destroy them at the time fast retransmission is triggered. This method reduces the response time of the system. However, the additional states increase the size of the data structure for the connection, and the scheduler is required to support an operation that destroys numerous tasks at one time. The second method is to determine whether or not a task is out-of-order during the execution of the task itself. If a currently running task is out-of-order, it terminates immediately. This method does not require much structural modification, but the improvement in the response time is smaller than in the first method because the decision is made after the task has been executed by the scheduler. If the transmission speed is low enough, only a few packets are sent in this way, but progressively more out-of-order tasks remain in the wait queue as the transmission speed increases.

4.3 Retransmission Timer

According to the TCP standard [20], a TCP connection maintains a retransmission timer for each packet and retransmits the packet when a timeout occurs. In reality, an absolute majority of detections and retransmissions of lost packets is performed by the fast retransmission mechanism, not by the retransmission timeout. Thus, it is not cost-effective, in terms of time and memory space, to manage a timer for each data packet for timeout events that rarely occur.

We manage only one retransmission timer in a TCP connection, and the timer stores the transmission time of the first packet in the sliding window that is not acknowledged by the recipient, i.e., the data packet with the lowest sequence number among all unacknowledged data packets. When a timeout occurs, the first packet in the sliding window is retransmitted and the subsequent data packets are processed in the same manner as fast retransmission. To receive the acknowledgement of the retransmitted packet from the recipient, the retransmission timer is then restarted.

4.4 Dynamic Adaptation

Because the event-driven scheduler does not guarantee fairness of the execution time among tasks, much of the system performance depends on how we balance the execution frequency of the three types of tasks, these being application tasks, packet transmission tasks, and packet reception tasks. Considering the purpose of a target device, the packet transmission throughput can be maximized if we decrease the frequency of packet reception tasks. If we instead increase the frequency of packet reception tasks, the packet reception throughput and the interactivity of the communication increase. Packet reception tasks are created periodically, with a default period of 1 millisecond. This period and the task granularity described in section 4.1 can be controlled to maximize the performance of the system. Thus, we dynamically adapted the execution frequency of packet transmission tasks and packet reception tasks at runtime by measuring the number of transmitted packets and received packets and by observing the state transitions of the TCP connection.
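One plausible shape for this runtime adaptation is sketched below. The structure, field names, thresholds, and period bounds are all our assumptions for illustration; the paper only states that the frequencies are adapted from measured packet counts and TCP state transitions.

```c
/* Hypothetical sketch of dynamic adaptation: per interval, compare the
 * transmitted and received packet counts and nudge the reception-task
 * creation period accordingly. Thresholds are invented for illustration. */
typedef struct {
    unsigned tx_count;       /* packets transmitted in the last interval */
    unsigned rx_count;       /* packets received in the last interval    */
    unsigned rx_period_ms;   /* creation period of packet reception tasks */
} adapt_state_t;

#define RX_PERIOD_MIN 1      /* default period: 1 millisecond */
#define RX_PERIOD_MAX 8

void adapt_rx_period(adapt_state_t *s)
{
    if (s->tx_count > 2 * s->rx_count && s->rx_period_ms < RX_PERIOD_MAX)
        s->rx_period_ms++;   /* transmission-heavy: run reception less often */
    else if (s->rx_count > 2 * s->tx_count && s->rx_period_ms > RX_PERIOD_MIN)
        s->rx_period_ms--;   /* reception-heavy: favor interactivity */
    s->tx_count = 0;         /* start a new measurement interval */
    s->rx_count = 0;
}
```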

5. PERFORMANCE EVALUATION

5.1 Throughput

We implemented the proposed lightweight TCP/IP protocol stack on a board embedded with TI's TMS320C6455 DSP. Table 1 shows the experimental environments. To ensure the correctness of the measurement, we used three different tools for network analysis: Wireshark, iPerf, and our own code inserted into the program.

Table 1. Experimental environments.

Host:
- Operating system: Microsoft Windows XP
- Processor: Intel Core 2 T7400 @ 2.16 GHz
- Memory: 1 GB
- Ethernet: Intel PRO/1000 PM Network

Embedded device:
- Operating system: None
- Processor: 1 GHz TMS320C6455
- Memory: DDR2 512 MB
- Ethernet: EMAC

First, we measured the maximum throughput at the EMAC driver level without considering the receiver's reaction. We changed the packet size from 128 to 1500 bytes with 100,000 packets and changed the number of packets from 1,000 to 20,000. The maximum throughput at the EMAC driver level was 426 Mbps. This represented the upper bound in this experimental environment (Fig. 5).

In an identical environment, we compared the transmission throughput of TCP/IP with that of UDP/IP. UDP/IP transmission uses a protocol stack identical to that of TCP/IP transmission apart from the TCP layer. Each packet transmission task transmits five packets. Transmitting 1,000,000 packets, TCP/IP transmission reached 296 Mbps and UDP/IP transmission reached 326 Mbps.

Fig. 5. EMAC transmission and reception throughput according to the packet size (a)(c) and the number of packets (b)(d).

Fig. 5. (Cont'd) EMAC transmission and reception throughput according to the packet size (a)(c) and the number of packets (b)(d).

As the size of the data increased, the transmission throughput changed from 10 to 296 Mbps. The transmission speed of the proposed lightweight TCP/IP protocol stack reached 69% of the upper bound using EMAC and 90% of the UDP/IP transmission (Fig. 6).

Fig. 6. Comparing TCP/IP transmission throughput (a) with UDP/IP transmission throughput (b) according to the number of packets.

To compare the proposed lightweight TCP/IP protocol stack with the traditional TCP/IP protocol stack, we ported our implementation to a Linux machine using a raw socket. In this case, the transmission throughput using the socket interface reached about 80 Mbps and our implementation reached 95 Mbps. The transmission throughput of the proposed lightweight TCP/IP protocol stack was 1.2 to 1.5 times faster than that of the TCP/IP protocol stack with Linux.

5.2 Task Granularity and Task Cancelation

We measured the impact of the task granularity described in section 4.1 on the transmission throughput of the system (Fig. 7). When the number of data packets for a task was less than 5, the transmission throughput changed according to the size of the wait queue in the scheduler, as too many tasks were created. In this case, the transmission throughput was less than 250 Mbps. The size of the wait queue

was varied from 256 to 4096. When the number of data packets for a task exceeded 10, the transmission throughput reached its maximum level and was stable. When it exceeded 50, buffer overflows frequently occurred and the throughput decreased sharply in the event of a packet loss or when the transmission was delayed. With the third level of granularity, in which a task creates itself before termination, the system did not show the expected level of performance: because the packet transmission tasks were executed too frequently, the packet reception tasks and application tasks could not easily be executed. With repetitive experiments, we found that transmitting 5 to 20 packets per task was the best. However, the optimal point can differ according to the execution environment, because the available sizes of the packet buffer and the wait queues are affected by the memory space of the device.

Fig. 7. Transmission throughput according to the number of data packets for a task.

Finally, we measured the difference in the delay between the execution with task cancelation and that without task cancelation. As it was not easy to measure this at runtime, we captured the packet sequence and traced the retransmission process after a packet loss occurred. When we used the task cancelation mechanism, the recovery was 5 to 10 times faster than it was without.

6. CONCLUSION

With the software-based TCP/IP Offload Engine (TOE), or lightweight TCP/IP, researchers have improved the design of the traditional TCP/IP protocol stack to optimize the system performance for use in embedded systems. In this paper, we proposed the design of a lightweight TCP/IP protocol stack that runs on an event-driven scheduler of the type used in real-time operating systems and analyzed its characteristics. We introduced three levels of task granularity for an event-driven scheduler and discussed the synchronization problem associated with packet transmission and reception which can occur in the TCP retransmission process. We implemented the proposed lightweight TCP/IP protocol stack on an embedded system and evaluated it to confirm that the proposed design can process network communication efficiently on an embedded system.

Lightweight TCP/IPs tend to be designed to achieve platform independence so as to be operable on various platforms. However, designs and implementations that are integrated with the software architecture, considering its platform and operating system, are required to optimize the network communication performance on an embedded system.

REFERENCES

1. M. C. Chan and R. Ramjee, "Improving TCP/IP performance over third generation wireless networks," in Proceedings of IEEE INFOCOM, 2004.
2. H. W. Jin, P. Balaji, C. Yoo, J. Y. Choi, and D. K. Panda, "Exploiting NIC architectural support for enhancing IP-based protocols on high-performance networks," Journal of Parallel and Distributed Computing, Vol. 65, 2005, pp. 1348-1365.
3. "Introduction to TCP/IP offload engine (TOE)," http://www.10gea.org, 2002.
4. P. Balaji, H. V. Shah, and D. K. Panda, "Sockets vs. RDMA interface over 10-Gigabit networks: An in-depth analysis of the memory traffic bottleneck," in Proceedings of the Workshop on Remote Direct Memory Access: Applications, Implementations, and Technologies, 2004.
5. Z. Z. Wu and H. C. Chen, "Design and implementation of TCP/IP offload engine system over gigabit ethernet," in Proceedings of the 15th International Conference on Computer Communications and Networks, 2006, pp. 245-250.
6. "Implementation of a software-based TCP/IP offload engine using standalone TCP/IP without an embedded OS," Journal of Information Science and Engineering, Vol. 27, 2011, pp. 1871-1883.
7. "Design and implementation of embedded network based on ...," Journal of Measurement Science and Instrumentation, 2010.
8. "The design and implementation of zero-copy for Linux," in Proceedings of IEEE International Symposium on Consumer Electronics, 2008.
9. M. L. Chiang and Y. C. Li, "LyraNET: A zero-copy TCP/IP protocol stack for embedded systems," Journal of Real-Time Systems, Vol. 34, 2006, pp. 5-18.
10. P. Steenkiste, "Design, implementation, and evaluation of a single-copy protocol stack," Software: Practice and Experience, Vol. 28, 1998, pp. 749-772.
11. K. Salah and K. El-Badawi, "Performance analysis and comparison of interrupt-handling schemes in gigabit networks," Computer Communications, Vol. 30, 2007, pp. 3425-3441.
12. "micro-IP," http://www.pjort.
13. A. Dunkels, "Design and implementation of the lwIP TCP/IP stack," Swedish Institute of Computer Science, 2001.
14. "tinyTCP," http://www.unusualresearch.com/tinytcp/tinytcp.htm.
15. "NexGenIP," http://www.nexgen-software.
16. "NETX," http://jnlp.
17. D. Schweikert, "A lightweight and high-performance TCP/IP stack for Topsy," Computer Engineering and Networks Laboratory, ETH Zurich, 2002.
18. "A lightweight protocol for wireless sensor networks," International Journal of Computer Information Systems and Industrial Management Applications, Vol. 3, 2011, pp. 009-018.
19. "Light-weighted internet protocol version 6 for low-power wireless personal area networks," in Proceedings of IEEE Wireless Communications and Networking Conference, 2011.

20. J. Postel, "Transmission control protocol," RFC 793, 1981, http://www.faqs.org/rfcs/rfc793.html.

Joonhyouk Jang received his B.S. degree in Computer Science from Seoul National University, Seoul, Korea, in 2008. He is currently a Ph.D. student of the School of Computer Science and Engineering, Seoul National University, Seoul, Korea. His current research interests include operating systems, embedded systems, and computer security.

Jinman Jung received his B.S. degree in Computer Science from Seoul National University, Seoul, Korea, in 2009. He is currently a Ph.D. student of the School of Computer Science and Engineering, Seoul National University, Seoul, Korea. His current research interests include operating systems, embedded systems, mobile communications, and fault-tolerant computing systems.

Yookun Cho received the B.E. degree from Seoul National University, Seoul, Korea, in 1971 and the Ph.D. degree in computer science from the University of Minnesota, Minneapolis, in 1978. Since 1979, he has been with the School of Computer Science and Engineering, Seoul National University, where he is currently a Professor. In 1985, he was a Visiting Assistant Professor with the University of Minnesota, and from 2001 to 2002 he was the President of the Korea Information Science Society. He also served as the honorary conference chair for ACM SAC 2007. His research interests include operating systems, embedded systems, algorithms, system security, and fault-tolerant computing systems.

Sanghoon Choi received his B.S. degree in Computer Science from Soongsil University, Seoul, Korea, in 2011. He is currently a Ph.D. student of the School of Computing, Soongsil University, Seoul, Korea. His current research interests include operating systems and embedded systems.

Sung Y. Shin received the M.S. and Ph.D. degrees in Computer Science from the University of Wyoming, Laramie, WY, in 1984 and 1991, respectively. He has been a Professor and Graduate Coordinator of Computer Science at South Dakota State University since 1991. He worked as a visiting scientist for the Space and Life Science Division at NASA Johnson Space Center in Houston, TX, from 1999 to 2002. He has authored/coauthored over 130 technical peer-reviewed papers in the areas of software engineering, software fault tolerance, telemedicine, and medical image processing and GIS. He served as a Vice Chair of ACM SIGAPP from 2005 to 2009 and is currently serving as the Chair of ACM SIGAPP. He was a Conference Chair of ACM SAC 2007, 2009, and 2010.