HCIA-HarmonyOS Device Developer V1.0 学员用书
HCIA-HarmonyOS
Device Developer
Student Book
Version: 1.0
Machine Translated by Google
Without the written permission of our company, no unit or individual may excerpt or copy part or all of the contents of this document.
Trademark Statement
Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks or registered trademarks mentioned in this document are the property of their respective owners.
Notice
The products, services, or features you purchase shall be subject to Huawei's commercial contracts and terms. All or part of the products, services, or features described in this document may not be within your purchase or usage scope. Unless otherwise agreed in the contract, Huawei makes no representations or warranties, express or implied, regarding the contents of this document.
Due to product version upgrades or other reasons, the content of this document will be updated from time to time. Unless otherwise agreed, this document serves only as a usage guide; all statements, information, and recommendations in it do not constitute a warranty of any kind, express or implied.
Address: Huawei Headquarters Office Building, Bantian, Longgang District, Shenzhen    Postcode: 518129
URL: http://e.huawei.com
Table of Contents
2.5 Using DevEco Device Tool for project management .......................... 23
2.6 Source code compilation for the two platforms .......................... 28
4.4 Introduction to the IoT proprietary hardware service subsystem driver .......................... 97
5.2.1 Component, chip solution, and product solution configuration rules .......................... 105
5.4.1 Native application development steps for LiteOS-A kernel KV storage .......................... 122
5.4.2 Usage guide for dumping system properties on the LiteOS-M kernel platform .......................... 124
5.4.3 Usage guide for dumping system properties on the LiteOS-A kernel platform .......................... 124
5.5.5 Vendor application integration of OTA capabilities .......................... 128
5.6.2 appspawn boot component .......................... 135
5.6.3 bootstrap service startup component .......................... 135
5.6.4 syspara system
6.2 Static registration Shell programming example .......................... 151
6.3 Dynamic registration Shell programming example .......................... 152
1.1 Preface
The operating system is an important part of Internet of Things technology. In an IoT system, the operating system is the core program that manages the IoT hardware: it performs basic tasks such as memory management, system resource configuration, input/output device control, and network and file system management. At the same time, the operating system provides an interface through which users interact with the IoT system, and users operate the system through this interface.
Throughout the development history of operating systems, each epoch-making operating system has brought users a new way of interacting. In the early days, users communicated with computer systems through the command line; later, they did so through clicks and touches on a graphical user interface. At the current stage, new interaction methods such as gestures, voice, and multi-device collaboration have emerged. With the continuous development of computer technology, the number of smart devices that users encounter has also increased greatly. How to enable an embedded device with limited resources to offer as rich an interaction experience as possible has become an important question.
With the development of Internet of Things technology, smart devices have entered every aspect of life. However, today's smart devices still rely mainly on simple message transmission and do not truly share hardware resources between devices, which prevents a device's hardware capabilities from being fully exploited. This means it is difficult for traditional operating systems to truly achieve cross-device resource sharing and distributed scheduling in the Internet of Everything. For example, in a typical distributed scenario, a video opened by the user on a mobile phone should be able to continue playing on the TV.
HarmonyOS is a future-oriented distributed operating system for all scenarios (mobile office, sports and health, social communication, media entertainment, etc.), built for the Internet of Everything era. On the basis of traditional single-device system capabilities, HarmonyOS proposes a distributed concept based on one set of system capabilities, adapted to multiple terminal forms, and able to support terminal devices such as mobile phones, tablets, smart wearables, smart screens, and cars.
HarmonyOS follows a layered design consisting, from bottom to top, of the kernel layer, the system service layer, the framework layer, and the application layer. System functions are expanded step by step along the hierarchy "System > Subsystem > Function/Module". In multi-device deployment scenarios, non-essential subsystems or functions/modules can be tailored according to actual needs. The technical architecture of HarmonyOS is as follows.
Machine Translated by Google
Kernel subsystem: HarmonyOS adopts a multi-kernel design, allowing an appropriate OS kernel to be selected for devices with different resource constraints. The Kernel Abstraction Layer (KAL) shields the differences between kernels and provides unified basic kernel capabilities to the upper layers, including process/thread management, memory management, and file system management.
Driver subsystem: The HDF (Hardware Driver Foundation) framework is the open foundation of the HarmonyOS hardware ecosystem. It provides unified peripheral access capabilities and a driver development and management framework covering driver loading, driver service management, and the driver message mechanism. It aims to build a unified driver architecture platform that gives driver developers a more precise and efficient development environment, striving for "develop once, deploy on multiple systems".
The system service layer is the core capability set of HarmonyOS, which provides services to applications through the framework layer. This layer contains the following
parts:
System basic capability subsystem set: Provides the basic capabilities for running, scheduling, migrating, and otherwise operating distributed applications across HarmonyOS devices. It consists of subsystems such as the distributed soft bus, distributed data management, distributed task scheduling, the Ark multi-language runtime, the common basic library, multimodal input, graphics, security, and AI. Among them, the Ark runtime provides the C/C++/JS multi-language runtime and basic system class libraries, and also provides the runtime for static Java programs compiled with the Ark compiler (that is, the parts of applications or the framework layer developed in the Java language).
Basic software service subsystem set: Provides common, general-purpose software services for HarmonyOS, consisting of subsystems such as event notification, telephony, multimedia, and DFX.
Enhanced software service subsystem set: Provides HarmonyOS with differentiated, capability-enhanced software services for different devices, consisting of subsystems such as smart-screen proprietary services, wearable proprietary services, and IoT proprietary services.
Hardware service subsystem set: Provides hardware services for HarmonyOS, consisting of subsystems such as location services, biometric recognition, wearable proprietary hardware services, and IoT proprietary hardware services.
Depending on the deployment environment of different device forms, the basic software service subsystem set, the enhanced software service subsystem set, and the hardware service subsystem set can each be tailored at subsystem granularity, and each subsystem can be tailored at function granularity.
The framework layer provides HarmonyOS application development with a multi-language user program framework for Java/C/C++/JS, two UI frameworks (a Java UI framework for the Java language and a JS UI framework for the JS language), and the multi-language framework APIs that expose the various software and hardware services. Depending on how heavily the system has been componentized, the range of APIs supported by a HarmonyOS device will vary.
Machine Translated by Google
The application layer includes system applications and third-party non-system applications. A HarmonyOS application consists of one or more FAs (Feature Ability) or PAs (Particle Ability). An FA has a UI and provides the ability to interact with users; a PA has no UI and provides the ability to run tasks in the background, as well as a unified data access abstraction. Applications developed on the FA/PA model support cross-device scheduling and distribution, providing users with a consistent and efficient application experience.
A variety of devices can achieve hardware mutual assistance and resource sharing. The key technologies this relies on include the distributed soft bus, distributed device virtualization, distributed data management, and distributed task scheduling.
The distributed soft bus is the communication base for distributed devices such as mobile phones, tablets, smart wearables, smart screens, and cars. It provides unified distributed communication capabilities for interconnection and interoperability between devices, creating the conditions for senseless device discovery and zero-wait transmission. Developers only need to focus on implementing business logic, without paying attention to networking methods or underlying protocols. A schematic diagram of the distributed soft bus is as follows:
The distributed device virtualization platform enables resource integration, device management, and data processing across different devices, so that multiple devices together form a super virtual terminal. For each type of task, execution hardware with the appropriate capabilities is matched and selected for the user, allowing services to flow continuously between devices and giving full play to the capabilities and advantages of each device, such as display, camera, audio, interaction, and sensor capabilities. A schematic diagram of distributed device virtualization is as follows:
Distributed data management builds on the distributed soft bus to manage application data and user data in a distributed manner. User data is no longer bound to a single physical device, and business logic is separated from data storage. Processing data across devices becomes as convenient and fast as processing local data, allowing developers to easily implement data storage, sharing, and access in all scenarios and across multiple devices, creating the basic conditions for a consistent, smooth user experience. The schematic diagram of distributed data management is as follows:
Distributed task scheduling builds on technical features such as the distributed soft bus, distributed data management, and distributed profiles to provide a unified distributed service management mechanism (discovery, synchronization, registration, and invocation), supporting cross-device operations on applications such as remote startup, remote invocation, remote connection, and migration. It can select an appropriate device to run a distributed task based on the capabilities, locations, business running status, and resource usage of different devices, as well as the user's habits and intentions.
The following figure takes application migration as an example to briefly demonstrate the distributed task scheduling capabilities.
Machine Translated by Google
HarmonyOS provides a user program framework, the Ability framework, and a UI framework that support reusing business logic and interface logic across multiple terminals during application development. This enables one-time development with multi-terminal deployment and improves the efficiency of developing cross-device applications. The UI framework supports two development languages, Java and JS, and provides a rich set of polymorphic controls that can present appropriate UI effects on mobile phones, tablets, smart wearables, smart screens, and cars. Adopting industry-mainstream design methods, it offers a variety of responsive layout solutions and supports grid layout.
Through design methods such as componentization and miniaturization, HarmonyOS supports flexible, on-demand deployment on a variety of terminal devices and can adapt to different classes of hardware resources and functional requirements. It supports the automatic generation of component dependencies from compilation chain relationships, forming a component-tree dependency graph, which supports convenient development of product systems and lowers the development threshold for hardware devices.
● Component selection (components are optional): the required components can be selected according to the form and needs of the hardware device.
● Configurable function sets within components (components can be large or small): the function set within a component can be configured according to the hardware device's resource conditions and functional requirements; for example, only some controls in the graphics framework component may be configured.
● Dependencies between components (the platform can be large or small): component dependencies can be generated automatically from compilation chain relationships; for example, selecting the graphics framework component automatically selects the graphics engine component it depends on.
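As a concrete illustration of component selection, OpenHarmony-style device builds describe the chosen subsystems and components in a product configuration file. The fragment below is only a sketch in the style of an OpenHarmony `config.json`; the product name, exact file location, and the particular subsystem/component names shown are assumptions for illustration, not taken from this book.

```json
{
  "product_name": "my_hi3861_product",
  "type": "mini",
  "subsystems": [
    {
      "subsystem": "iot_hardware",
      "components": [
        { "component": "iot_controller", "features": [] }
      ]
    },
    {
      "subsystem": "graphic",
      "components": [
        { "component": "graphic_utils", "features": [] }
      ]
    }
  ]
}
```

Removing a subsystem entry from such a file tailors it out of the build, which is how the "components are optional" rule above is realized in practice.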
On distributed terminals running HarmonyOS, it can be ensured that "the right person uses the right data through the right device":
● The "right person" is ensured through distributed, multi-terminal collaborative identity authentication.
● The "right device" is ensured by building trusted running environments on distributed terminals.
● The "correct use of data" is ensured through classified and hierarchical management of data as it flows across distributed terminals.
In the distributed terminal scenario, "correct people" refer to data visitors and business operators who have passed identity authentication. "The right person" is a
prerequisite to ensure that user data is not illegally accessed and user privacy is not leaked. HarmonyOS implements collaborative identity authentication through the following
three aspects:
● Zero-trust model: HarmonyOS implements user authentication and data access control on the basis of a zero-trust model. When a user needs to access data resources across devices or initiate a high-security business operation (for example, an operation on a security device), HarmonyOS authenticates the user to ensure that their identity is reliable.
● Multi-factor fusion authentication: HarmonyOS uses user identity management to associate the authentication credentials that identify the same user on different devices, so that the same user can be identified across devices.
● Collaborative mutual-assistance authentication: By decoupling hardware from authentication capabilities (that is, information collection and authentication can be completed on different devices), HarmonyOS pools the resources of different devices and lets them share capabilities and assist one another, allowing high-security devices to assist low-security devices in completing user authentication.
In a distributed terminal scenario, only by ensuring that the devices users use are secure and reliable can user data be protected in the virtual environment.
● Secure boot: Ensures at the source that the system firmware and applications running on every virtual device are intact and untampered. With secure boot, the image packages of device manufacturers cannot easily be replaced illegally with malicious programs, thereby protecting user data and privacy.
● Trusted execution environment: Provides a hardware-based Trusted Execution Environment (TEE) to protect the storage and processing of users' sensitive personal data and ensure that the data is not leaked. Because the security capabilities of distributed terminal hardware differ, users' sensitive personal data needs to be stored and processed on high-security devices. HarmonyOS uses a TEE microkernel developed and verified with mathematically provable formal methods.
● Device certificate authentication: Devices with a trusted execution environment can be preset with device certificates to prove their security capabilities to other virtual terminals. For devices with a TEE, a preset PKI (Public Key Infrastructure) device certificate provides proof of device identity, ensuring that the device was legally manufactured. Device certificates are preset on the production line; the private key of a device certificate is written into and securely stored in the device's TEE, and is used only within the TEE. When sensitive user data (such as keys or encrypted biometric information) must be transmitted, a secure channel is established between the TEE of one device and the TEE of another after the device certificates are used to verify the security environments, achieving secure transmission. As shown below:
HarmonyOS provides full life-cycle protection around data generation, storage, use, transmission, and destruction, thereby ensuring that personal data and privacy, as well as the system's confidential data, are not leaked.
● Data generation: Data is classified according to the laws, regulations, and standards of the country or organization where it is located, and a corresponding protection level is set for each class. From the moment data is generated, each protection level requires security protection of a different strength, applied according to the corresponding security policies throughout the data's life cycle of storage, use, and transmission. The access control system of the virtual super terminal supports tag-based access control policies, ensuring that data is stored, used, and transmitted only between virtual terminals that can provide sufficient security protection.
● Data storage: HarmonyOS protects data by distinguishing its security level and storing it in partitions with different security protection capabilities. It also provides a seamless cross-device flow of keys throughout their life cycle and cross-device key access control, supporting distributed identity authentication collaboration, distributed data sharing, and other services.
● Data usage: HarmonyOS provides devices with a hardware-based trusted execution environment. Users' sensitive personal data is used only within the trusted execution environments of the distributed virtual terminals, ensuring that the security and privacy of user data are not compromised.
● Data transmission: To ensure that data flows securely between virtual super terminals, each device must be correct and trustworthy: the devices must establish a trust relationship (multiple devices establish a pairing relationship through a Huawei account), and after the trust relationship is verified, a secure connection channel can be established so that data is transmitted securely according to the data-flow rules. When devices communicate, each device is authenticated based on its identity credentials, and a secure, encrypted transmission channel is established on that basis.
● Data destruction: Destroying the key destroys the data. Data storage on virtual terminals is based on keys; when data needs to be destroyed, only the corresponding key needs to be destroyed to complete the destruction of the data.
The development process for HarmonyOS devices is roughly divided into the following steps: code writing, code compilation, image burning, and serial-port debugging. In this course, devices are developed on the Hi3861 chip development board, and the development environment involves both the Windows and Linux operating systems; the development environment and process are shown in the figure below. The Hi3861 chip currently supports compilation on Windows, with the code obtained through HPM, whereas the Hi3516 and Hi3518 chips still need to be compiled in a Linux environment. In this course we use a Windows + Ubuntu setup to build the project.
During device development, software is not compiled and executed on the same device: compilation is done on a computer, and the compiled product is executed on the development board. This process is called cross-compilation. Cross-compilation is essential in the embedded development of hardware devices, mainly because an embedded device's hardware resources are too limited to compile the source code itself. Compilation is therefore handed over to a device with more resources, and the hardware device only executes the compiled product.
During the development of HarmonyOS hardware devices, the HUAWEI DevEco Device Tool (hereinafter DevEco Device Tool) is used. DevEco Device Tool is a one-stop integrated development environment that HarmonyOS provides for smart-device developers. It supports on-demand customization of HarmonyOS components; supports code editing, compilation, burning, and debugging; supports the C/C++ languages; and is deployed on Visual Studio Code in the form of a plug-in.
DevEco Device Tool supports Windows and Ubuntu and provides code search, code highlighting, automatic code completion, code input prompts, code checking, and more, so developers can write code easily and efficiently. It supports multiple types of development boards, including the Huawei HiSilicon Hi3516/Hi3518 series, the Hi3861 series, the XR872, the Neptune development board, and the BearPi-HM Nano development board, and it supports single-step debugging and the viewing of memory, variables, call stacks, and registers.
DevEco Device Tool is mainly divided into the following four functional areas:
● Tool control area: provides functions such as project import, configuration, burning, and debugging.
● Code editing area: provides code viewing, writing, and debugging functions.
● Output console: provides functions such as printing run logs, entering debugging commands, and command-line tools.
● Quick control functions: provide shortcut commands for DevEco Device Tool, such as Build, Upload, and Erase.
The kit includes a core board, a base board, a traffic light board, a colorful light board, an environmental detection board, an OLED display board, and an NFC board.
The core board is the Hi3861 WLAN module, a development board about 2 cm x 5 cm in size. The core board mainly includes the following components: the Hi3861 module, a CH340 USB-to-serial chip, a USB Type-C interface, a reset button, a programmable USER button, a programmable LED, and three jumper caps.
The Hi3861 chip integrates flash memory to store static data such as binary code and configuration parameters; the CPU executes programs, and the SRAM holds data while a program runs. The built-in Wi-Fi function provides network connectivity for applications.
The programmable USER button (labeled USER) and the programmable LED (labeled LED1) can be controlled by the user program.
Of the three jumper caps on the core board, the two parallel jumper caps connect the main control chip to the serial-port chip; unplugging them disconnects the main control chip's UART interface from the CH340 USB-to-serial chip. Unplugging the independent jumper cap disconnects the main control chip from the programmable LED.
The Hi3861 WLAN module is a highly integrated 2.4 GHz WLAN SoC chip that integrates the IEEE 802.11b/g/n baseband and RF (Radio Frequency) circuits. It supports HarmonyOS and provides an open, easy-to-use development and debugging environment. The Hi3861 WLAN baseband supports Orthogonal Frequency Division Multiplexing (OFDM) technology, is backward compatible with Direct Sequence Spread Spectrum (DSSS) and Complementary Code Keying (CCK), and supports the various data rates of the IEEE 802.11b/g/n protocol. The module's resources are very limited: the entire board has 2 MB of flash and 352 KB of RAM in total, so attention must be paid to resource usage efficiency when writing business code. In the experiments of this course, the business code is implemented mainly in C.
In addition, the Hi3861 WLAN module can also expand its peripheral capabilities by connecting to the Hi3861 base board, as shown in the figure below.
The base board contains two vertical pin-header sockets, into which the core board is plugged, plus four horizontal pin-header sockets and four horizontal pin headers. The two pin-header sockets in the middle of the base board accept the OLED display board; the two sockets on the right side can connect the traffic light board, the colorful light board, or the environmental detection board. The pin header on the upper middle edge of the board can connect an external NFC board; the NFC board can also be attached through the NFC board interface. Either of these two connection methods may be used.
The lower-left corner of the base board contains a battery socket and a power switch. During the software-debugging stage, the USB cable can power the board directly; after debugging is complete, the board can be powered by the battery, or the USB cable can be connected to a mobile power bank.
The traffic light board mainly includes the following components: three LEDs of different colors (red, yellow, and green), a button, and a buzzer. All of these are programmable components that developers can call and control as needed.
The environmental detection board mainly contains the following components: an AHT20 digital temperature and humidity sensor, an MQ-2 combustible gas sensor, and a buzzer. The temperature and humidity sensor measures both the temperature and the humidity of the environment the sensor is in. The gas-sensitive material in the combustible gas sensor is tin dioxide. In clean air the conductivity of tin dioxide is low, so the sensor's resistance is high; in an environment containing combustible gas or smoke, the conductivity of tin dioxide increases, so the sensor's resistance is low. A series voltage-divider circuit converts the sensor's resistance into a voltage output, thereby converting the combustible gas concentration into an electrical signal.
The colorful light board mainly includes the following components: a three-color LED lamp, a photoresistor, and a human-body infrared sensor. The three-color LED has red, green, and blue channels. The resistance of the photoresistor depends strongly on the light intensity: in the ADC experiment of this course, when the light intensity is insufficient, the value output over the serial port by the ADC is around 1800, while with sufficient light the output value differs markedly.
The NFC board mainly contains the following components: an FM11C08I NFC chip, a two-position DIP switch, and a printed-circuit NFC coil. The NFC coil receives NFC signals; the FM11C08I chip encodes and decodes NFC signals and also communicates with the main control chip; the DIP switch is used for function selection. This expansion board is not used in the experiments of this course; interested students can learn to use it on their own.
The main control chip of the HiSpark Pegasus smart home kit is the Hi3861. The chip integrates a high-performance 32-bit microprocessor, a hardware security engine, and a wealth of peripheral interfaces, including SPI (Serial Peripheral Interface), UART (Universal Asynchronous Receiver/Transmitter), I2C (Inter-Integrated Circuit), PWM (Pulse Width Modulation), GPIO (General-Purpose Input/Output), and a multi-channel ADC (Analog-to-Digital Converter), with a maximum clock frequency of 50 MHz. The chip has built-in SRAM (Static Random Access Memory) and flash and can run independently. The Hi3861 chip is suitable for IoT smart-terminal fields such as smart home appliances; its functional framework is shown in the figure below.
The BearPi-HM Nano development kit includes a BearPi-HM Nano mainboard, a basic expansion board, an E53_IA1 expansion board, and a flower-protector board.
The BearPi-HM Nano mainboard is a development board designed specifically for HarmonyOS. It carries the highly integrated 2.4 GHz WLAN SoC chip Hi3861, together with an onboard NFC circuit and a standard E53 interface. Through the standard E53 interface it can be expanded into cases such as a smart humidifier, smart desk lamp, smart security device, or smart smoke detector, and most pins of the main control chip are led out through double-row pin headers to facilitate expansion.
The baseband of the BearPi-HM Nano mainboard's Hi3861 main control chip supports Orthogonal Frequency Division Multiplexing (OFDM) and is compatible with Direct Sequence Spread Spectrum (DSSS) and Complementary Code Keying (CCK). The chip integrates the IEEE 802.11b/g/n baseband and RF circuits, supports the 20 MHz standard bandwidth and 5 MHz/10 MHz narrow bandwidths, and integrates a high-performance 32-bit microprocessor, a hardware security engine, and rich peripheral interfaces.
The Type-C port of the BearPi-HM Nano mainboard provides power supply, programming, and log printing. The onboard NFC circuit lets developers write custom data such as text, URLs, and application package names. The board also carries two user-programmable buttons.
The basic expansion board mainly includes the following components: an LED controlled via GPIO, usable for learning GPIO driver development; a buzzer controlled via PWM, usable for learning PWM driver development; a photosensitive sensor read via the ADC, usable for learning ADC driver development; an I2C-driven barometric pressure sensor and three-axis accelerometer, usable for learning I2C driver development; and an SPI-driven LCD screen, usable for learning SPI driver development. Most common peripheral driver development can be practiced on this basic expansion board.
The E53_IA1 expansion board mainly includes the following components: a high-precision SHT30 temperature and humidity sensor, a BH1750 light-intensity sensor, a fill light, and a DC motor. The temperature and humidity sensor measures the temperature and humidity of the environment the sensor is in, the light-intensity sensor detects the ambient light level, the fill light can simulate home lighting equipment, and the DC motor can simulate a household fan.
The Flower Guardian board mainly contains the following components: a soil-moisture sensor, an SHT30 temperature and humidity sensor, a water-pumping motor, and a battery box. The soil-moisture sensor collects the plant's soil moisture in real time; when the soil moisture is found to be too low, the pumping motor can be controlled from a mobile phone to pump water and water the plant.
During the experiments of this course, both a Linux compilation environment and a Windows development environment are used. An Ubuntu virtual machine is built on Windows, and the samba tool is then used to create a shared folder between the local virtual machine and Windows, so that resources are shared between the Windows environment and the Ubuntu environment. After obtaining the source code, decompress the full source code into the shared folder and build the compilation environment through Docker on Ubuntu. This completes the setup of the code compilation environment.
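The setup above can be sketched with a few commands. The share name, paths, user name, and Docker image tag below are assumptions for illustration; substitute the values from your own setup and check the current HarmonyOS device documentation for the exact image name.

```shell
# On Ubuntu: expose the source folder to Windows via samba (assumed share name/path).
sudo apt-get install -y samba
printf '[harmonyos]\n  path = /home/harmonyos/share\n  writable = yes\n' | sudo tee -a /etc/samba/smb.conf
sudo smbpasswd -a harmonyos
sudo service smbd restart

# On Ubuntu: compile inside the prebuilt container, mounting the shared source tree.
# The image tag is an assumption; verify it against the current docs.
docker run -it -v /home/harmonyos/share/openharmony:/home/openharmony \
    swr.cn-south-1.myhuaweicloud.com/openharmony-docker/openharmony-docker:1.0.0
```

On the Windows side, the same folder is then reachable as a mapped network drive (for example `\\<vm-ip>\harmonyos`).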
The development process is shown in the figure below. First, modify the source code in the shared folder to complete source-code editing; then compile the code in the Ubuntu environment, with the build output stored in the out folder under the source directory. Because the source code lives in the shared folder, the build output is also in the shared folder. Finally, obtain the build output in the Windows environment and perform image burning.
For the experiments and cases in this course, the development environment specifically involves the following software: the Windows 10 operating system, the Oracle VM VirtualBox virtual machine, an Ubuntu 20.04 image, Visual Studio Code, and DevEco Device Tool. Oracle VM VirtualBox is used on Windows 10 to build the virtual-machine runtime environment; the virtual-machine image is Ubuntu 20.04, and Visual Studio Code is installed on Windows 10.
Please refer to the table below for the specific software version requirements in the Windows environment. In the Windows development environment, Visual Studio Code is the main code-editing and debugging tool. In addition, you can use Visual Studio Code's built-in terminal to connect to the local virtual machine over ssh and operate the virtual-machine environment to compile code. Python and Node.js are both prerequisites for installing DevEco Device Tool, so you need to complete the Python and Node.js installations first.
Visual Studio Code | code editing tool | V1.53 or later, 64-bit version
HUAWEI DevEco Device Tool (hereinafter DevEco Device Tool) is the one-stop integrated development environment that HarmonyOS provides for smart-device developers. It supports on-demand customization of HarmonyOS components; supports code editing, compiling, burning, and debugging; supports the C/C++ language; and is deployed on Visual Studio Code in the form of a plug-in.
DevEco Device Tool supports Windows and Ubuntu systems and has the following features:
1) Supports code search, code highlighting, automatic code completion, code input prompts, code inspection, and more, helping developers edit code efficiently.
2) Supports multiple types of development boards, including Huawei HiSilicon's Hi3516/Hi3518 series and Hi3861 series development boards, third-party manufacturers' Imx6ull, Rtl8720, Xr872, and Neptune development boards, and the BearPi-HM Nano development board.
3) Support single-step debugging capabilities and view debugging information such as memory, variables, call stacks, registers, and assembly.
The DevEco Device Tool tool is mainly divided into the following 4 functional areas.
1) Tool control area: Provides functions such as project import, configuration, burning, and debugging.
2) Code editing area: Provides code viewing, writing and debugging functions.
3) Output console: Provides functions such as printing operation logs, inputting debugging commands, and command line tools.
4) Quick control functions: provide shortcut operation commands for DevEco Device Tool, such as Build and Upload.
With HarmonyOS officially open source, the HPM package manager emerged. HPM, short for HarmonyOS Package Manager, is a management and distribution tool for HarmonyOS component packages. It is mainly a tool set for device developers to obtain and customize HarmonyOS source code and to perform installation, compilation, packaging, and other operations.
HPM is closely tied to HarmonyOS component development. HarmonyOS software takes components (bundles) as its basic unit: from a system perspective, any software running on HarmonyOS can be defined as a component. Generally, according to their scope of application, components can be divided into:
Board-level components: components related to device hardware such as board, arch, and mcu.
System components: A collection of independent functions, such as kernel, file system, framework, etc.
Application components: applications that directly provide services to users (such as wifi_iot, ip_camera).
From a formal point of view, components are born for reuse: any module that can be reused can be defined as a component. By form, components can be divided into source code, binaries, code snippets, and distributions. A component usually corresponds to a code repository, with bundle.json, README, and LICENSE files added on top of the code. A distribution is made up of multiple components: it integrates the various components of a complete system (such as drivers, kernel, frameworks, and applications) and can be used for device burning. The relationship between components and distributions is shown in the figure below.
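As a sketch of what the component metadata looks like, here is a minimal bundle.json. The exact field set varies by HPM version, so treat the keys below (name, version, publishAs, dependencies) and the component names as illustrative rather than authoritative.

```json
{
  "name": "@ohos/my_component",
  "version": "1.0.0",
  "publishAs": "code-segment",
  "description": "Illustrative component metadata, not a real published bundle",
  "dependencies": {
    "@ohos/kernel_liteos_m": "^1.0.0"
  }
}
```

The dependencies field is where the required dependencies discussed below are declared; HPM resolves and downloads them when the component is installed into a project.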
In principle, components should be divided at as fine a granularity as possible to maximize reuse. The division principles mainly consider the following:
Independence: a component's functions should be relatively independent; it should support independent compilation and be able to provide external interfaces and services on its own.
Coupling: if a component must depend on other components in order to provide external services, consider merging it with the components it depends on.
Dependency: if a group of components jointly completes a function, does not depend on other components, and is unlikely to be depended on individually, consider merging them into one component.
There are two types of component dependencies: required dependencies and optional dependencies. A required dependency means that when component A implements a function, it must introduce component B and call B's interfaces or services; B is then a required dependency of A. An optional dependency means that when component A implements a function, it can introduce either component C or component D, which are interchangeable; C and D are optional dependencies of A.
For the Ubuntu development environment, a corresponding Docker image is currently available that encapsulates the related compilation tools; developers who choose to use Docker can skip this section. If you choose to build the compilation environment manually in the Linux environment, obtain the following tools:

Tool | Description | Get address
LLVM | compilation toolchain | https://device.harmonyos.…uce/oem_minitinier_environment_lin-0000001105407498
hb | HarmonyOS compile-and-build command-line tool |
Python is the most basic compilation and build tool; it needs to be obtained first, and version 3.7 or above is required. Ubuntu 20.04 installs Python 3.8 by default (version numbers may differ, but all are 3.7 or above). Use the command python3 --version to check the system's current Python version.
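For example, checking the interpreter version on Ubuntu (any Python 3.7 or later will do):

```shell
python3 --version   # prints something like "Python 3.8.10"
# Fail loudly if the interpreter is older than 3.7:
python3 -c 'import sys; assert sys.version_info >= (3, 7), sys.version'
```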
ninja is a small build system focused on speed. gn, short for Generate Ninja, is used to generate ninja files. HarmonyOS's compilation and build subsystem is based on gn and ninja. The subsystem is a compilation framework supporting component-based OpenHarmony development and mainly provides functions such as building existing products and independently building chip-vendor source code.
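To make the gn/ninja relationship concrete, here is a minimal hand-written build.ninja of the kind gn generates automatically from BUILD.gn descriptions. The rule names and file names are illustrative, not taken from the HarmonyOS build.

```
rule cc
  command = gcc -c $in -o $out
rule link
  command = gcc $in -o $out

build hello.o: cc hello.c
build hello: link hello.o
```

Running `ninja` in a directory containing this file compiles hello.c and links the result; in HarmonyOS builds, gn emits files like this for every component and ninja executes them.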
LLVM is a compilation toolchain, more precisely a cross-compilation toolchain. The meaning of "toolchain" can be explained from two aspects: the tool and the chain. The "tool" part covers compilation and linking. For example, compiling a C source file to produce a preprocessed file, an assembly file, and an object file is a typical compilation process, in which the compiler used is gcc; linking is the process of turning multiple object files or library files into an ELF executable through the ld linker. The "chain" part means there is not just one tool but several, and these tools must run in a specific order; different source files likewise have a specific compilation sequence, and multiple tools are used along the way.
hb is the command-line tool for HarmonyOS compilation and build. Commonly used commands include hb clean, which clears the previous build's products.
Create a new project. In the Ubuntu environment, you can obtain the source code and the corresponding toolchain through HPM. The new project can then be used in the Windows environment to burn images.
Open the terminal tool and execute the following command to check HPM's network status. If the network is abnormal, check the network settings or configure an HPM proxy.
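The check and, if needed, the proxy configuration look roughly like this. The proxy address is a placeholder, and the registry URL shown is the default one at the time of writing; verify both against the current HPM documentation.

```shell
hpm -V                                              # verify that hpm itself runs
hpm config set registry https://hpm.harmonyos.com   # point at the component registry
# Behind a corporate proxy (placeholder address):
hpm config set http_proxy  http://proxy.example.com:8080
hpm config set https_proxy http://proxy.example.com:8080
```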
Open DevEco Device Tool, enter the Home page, and click New DevEco Project to create a new project. On the configuration-wizard page of the new project, enter the project configuration information and click Finish. The new project has the following parameters:
Name: Enter the project name, which can only contain uppercase letters, lowercase letters, numbers, underscores (_), dashes (-) and dots (.).
Location: Set the storage path of the project. By default, it is stored in the tool's default path. If you need to change the storage path, uncheck the Use default location option and choose another directory.
After the project is created, click the Open button under Project below to open it. After the project is opened, the project directory is as shown below, containing only the configuration files automatically generated by the tool.
Click HPM, select the solution in the list or find it through the search box, click Install to project, select the project name, and download the source code. The download takes about 5 minutes; please wait patiently and do not leave the current page. After the download completes, you can view the corresponding source files in the project directory on the left.
Open DevEco Project is suitable for opening projects created by DevEco Device Tool as well as obtained HarmonyOS source code. When HarmonyOS source code is opened, DevEco Device Tool prompts whether to open it; click Open, and then configure the development-board type and project-structure type.
Open the DevEco Device Tool, enter the Home page, and click Open DevEco Project to open the project.
Set the development-board type and the Framework, where the Framework indicates the source of the source code:
Ohos-sources: obtained from the open-source community mirror site or code repository; the source-code version is 1.0.0.
Hb: obtained from the open-source community mirror site or code repository; the source-code version is 1.0.1 or 1.1.0.
When DevEco Device Tool is used to compile and burn source code, different development boards rely on different toolchains, and developers usually need to obtain the corresponding toolchain for their board. DevEco Device Tool provides unified management for these toolchains: developers only need to set the toolchain's local path in DevEco Device Tool, and during compilation and burning DevEco Device Tool automatically calls the corresponding tools.
DevEco Device Tool presets commonly used toolchains, as shown in the figure below, which developers can use directly when performing related operations.
DevEco Device Tool supports source-code compilation for HiSilicon series boards (Hi3861/Hi3516/Hi3518, etc.) and third-party development boards (W800/BL602/Xr872, etc.). If the source code is obtained component-wise from HPM (Package Manager), the relevant toolchain is automatically included and does not need to be set manually. If the source code is obtained from the mirror site or the code repository, the developer needs to set the toolchain manually.
DevEco Device Tool supports the Windows compilation environment and currently provides one-click source compilation for the Hi3861. The source code of the Hi3861 development board can be downloaded from:
https://repo.huaweicloud.com/harmonyos/os/windows_code/code-20210414_1459.tar.gz
After the download completes, unzip the source package and open the project with Visual Studio Code; when setting the Framework, select Hb. Developers also need to download and configure the development toolchain through the following links.
Tool | Path set in Tools | Download address
ninja | the folder where ninja.exe is located | https://repo.huaweicloud.com/harmonyos/compiler/ninja/1.9.0/windows/ninja-win.zip
gn | the folder where gn.exe is located | https://repo.huaweicloud.com/harmonyos/compiler/gn/1744/windows/gn-windows-amd64.zip
gcc_riscv32 | the gcc_riscv32_win folder | http://www.hihope.org/download/download.aspx?mtt=34
tool_msys | the msys\bin folder | https://sourceforge.net/projects/mingw/
After the toolchain packages are ready, start compiling the source code. In Projects, click the project's Settings button; in the hi3861 configuration tab, set the source-compilation type build_type (the default is release), modify it as needed, and then click Save.
After saving, click Open to open the Hi3861 project. Click the DEVECO icon on the left to open the DevEco Device Tool interface. In PROJECT TASKS, click the Build button under the corresponding development board to execute compilation. When compilation finishes, the command-line window outputs SUCCESS. The compiled files are located in the out directory of the project and can be used for subsequent burning.
DevEco Device Tool supports the Ubuntu compilation environment and currently provides one-click source compilation for HiSilicon boards (Hi3861/Hi3516/Hi3518, etc.) and third-party manufacturers' development boards and chip modules (Neptune/BL602/Rtl8720, etc.). Different development boards obtain their source code in different ways, and the toolchain requirements for compilation also differ. The source-acquisition methods and toolchain settings required for each board are shown in the following table.
List of development-board models and corresponding source-code acquisition methods:

Development board model | How to obtain source code | Compile-toolchain settings
Hi3861 development board | obtain component-wise from HPM (Package Manager) | no additional toolchain setup required
Neptune development board | obtain component-wise from HPM (Package Manager) | no additional toolchain setup required
BearPi-HM Nano development board | obtain component-wise from HPM (Package Manager) | no additional toolchain setup required
BL602 chip module | contact deveco_studio@huawei.com to obtain the source code; set the Framework when importing it | refer to "Setting up the BL602 compilation toolchain"
Rtl8720 chip module | contact deveco_studio@huawei.com to obtain the source code; set the Framework when importing it | refer to "Setting up the Rtl8720 compilation toolchain"
After obtaining the source code, you need to set up the compilation toolchain for the development board. For the specific compilation-toolchain settings corresponding to each development board, please refer to the table below.
Development board model | Kit name | Get address | Path set in Tools
Hi3861 development board | gcc_riscv32 | https://repo.huaweicloud.com/harmonyos/compiler/gcc_riscv32/7.3.0/linux/gcc_riscv32-linux-7.3.0.tar.gz | gcc_riscv32 folder
Hi3516/Hi3518 development boards | gn | https://repo.huaweicloud.com/harmonyos/compiler/gn/1717/linux/gn-linux-x86-1717.tar.gz | folder containing the gn executable
Hi3516/Hi3518 development boards | ninja | https://repo.huaweicloud.com/harmonyos/compiler/ninja/1.9.0/linux/ninja.1.9.0.tar | folder containing the ninja executable
Hi3516/Hi3518 development boards | llvm | https://repo.huaweicloud.com/harmonyos/compiler/clang/9.0.0-36191/linux/llvm-linux-9.0.0-36191.tar | llvm\bin folder
Hi3516/Hi3518 development boards | hc-gen | https://repo.huaweicloud.com/harmonyos/compiler/hc-gen/0.65/linux/hc-gen-0.65-linux.tar | folder containing the hc-gen executable
Neptune development board | gn | https://repo.huaweicloud.com/harmonyos/compiler/gn/1717/linux/gn-linux-x86-1717.tar.gz | folder containing the gn executable
Neptune development board | ninja | https://repo.huaweicloud.com/harmonyos/compiler/ninja/1.9.0/linux/ninja.1.9.0.tar | folder containing the ninja executable
Neptune development board | minilibc | https://occ.t-…/minilibc-20210423.tar |
BL602 chip module | gn | https://repo.huaweicloud.com/harmonyos/compiler/gn/1717/linux/gn-linux-x86-1717.tar.gz | folder containing the gn executable
BL602 chip module | gcc_riscv32 | https://repo.huaweicloud.com/harmonyos/compiler/gcc_riscv32/7.3.0/linux/gcc_riscv32-linux-7.3.0.tar.gz | gcc_riscv32 folder
Rtl8720 chip module | gn | https://repo.huaweicloud.com/harmonyos/compiler/gn/1717/linux/gn-linux-x86-1717.tar.gz | folder containing the gn executable
Rtl8720 chip module | ninja | https://repo.huaweicloud.com/harmonyos/compiler/ninja/1.9.0/linux/ninja.1.9.0.tar | folder containing the ninja executable
Rtl8720 chip module | llvm | https://repo.huaweicloud.com/harmonyos/compiler/clang/9.0.0-36191/linux/llvm-linux-9.0.0-36191.tar | llvm\bin folder
Asr582x chip module | gn | https://repo.huaweicloud.com/harmonyos/compiler/gn/1717/linux/gn-linux-x86-1717.tar.gz | folder containing the gn executable
Asr582x chip module | ninja | https://repo.huaweicloud.com/harmonyos/compiler/ninja/1.9.0/linux/ninja.1.9.0.tar | folder containing the ninja executable
Asr582x chip module | llvm | https://repo.huaweicloud.com/harmonyos/compiler/clang/9.0.0-36191/linux/llvm-linux-9.0.0-36191.tar | llvm\bin folder
Since each development board's source-compilation method and process are the same, with the only difference being the required compilation toolchain, the Hi3516DV300 development board is used as the example to illustrate the compilation process. In Projects, click the project's Settings button; in the development-board configuration tab (such as hi3516dv300), set the source-compilation type build_type (the default is release), modify it as needed, and then click Save.
After saving, click Open to open the project, click the DEVECO icon in the left menu bar to open the DevEco Device Tool interface, and in PROJECT TASKS, click the Build button under the corresponding development board to execute compilation. After compilation succeeds, SUCCESS is output in the command-line window, indicating that compilation is complete; the build products are located in the project's out directory.
In this course's experiments, the hb command is used to compile the code. The specific process is as follows. Visual Studio Code connects remotely to the Linux host over ssh using its Terminal function. Enter ssh harmonyos@192.168.56.104 in the terminal window and enter the password.
hb set
hb build
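Putting the whole command-line flow together (the IP address comes from this course's lab setup; the source path is an assumption for illustration):

```shell
ssh harmonyos@192.168.56.104   # log in to the Ubuntu build host from VS Code's terminal
cd ~/share/openharmony         # assumed location of the shared source tree
hb set                         # interactively choose the source root and target product
hb build                       # compile; output lands in ./out
hb clean                       # optional: remove the previous build's products
```

hb set only needs to be run again when the target product changes; hb build can also be given -f to force a full rebuild.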
DevEco Device Tool provides a one-click burning function that is simple to operate and completes program burning quickly and efficiently.
Hi3861 series development boards support serial-port burning, and only on Windows systems. Connect the computer and the development board to be burned through the USB port. Open the computer's Device Manager, then view and record the corresponding serial-port number; this serial port will be used for burning and serial-port debugging. If the corresponding serial port is abnormal, install the USB-to-serial driver according to the Hi3861 documentation.
Open DevEco Device Tool, in Projects, click Settings to open the project configuration interface.
On the Partition Configuration tab, set the file information to be burned. By default, DevEco Device Tool is already adapted for Hi3861 series development boards, and no separate modification is needed.
On the hi3861 tab, set the burning options, including upload_port, upload_partitions and upload_protocol.
After all configurations have been modified, click Save at the top of the project configuration tab to save.
Open the project file, in the PROJECT TASKS of the DevEco Device Tool interface, click the Upload button under hi3861
to start burning.
After burning starts, when the tool indicates that it is connecting to the device, press the RST button on the development board to restart it; after power-on, burning begins. When the interface shows the following information, burning has succeeded.
In addition to DevEco Device Tool, the HiBurn tool can also be used for burning, for example to burn the Hi3861_wifiiot_app_allinone.bin file to the Hi3861 development board.
First, obtain the HiBurn tool. DevEco also calls the HiBurn burning tool during the actual burning process; in fact, HiBurn can be extracted from the DevEco Device Tool package, and interested readers can look up the related method. The recommended way to obtain it is to find the HiBurn download address in the 51CTO official cooperation community at https://harmonyos.51cto.com/resource/29, and open HiBurn after the download completes.
Click Setting -> Com settings in the upper-left corner of the interface to enter the serial-port parameter settings. Baud is the baud rate; the default is 115200, and you can also choose 921600, 2000000, or 3000000 (the fastest value supported in actual measurements). The other parameters can keep their default values.
Click "Select file" to pop up the file selection dialog box, select the allinone.bin file generated by compilation. This bin is actually a merged file of multiple bins,
which can also be seen from the naming. For example, I selected Z:\harmonyos\
openharmony\out\wifiiot\Hi3861_wifiiot_app_allinone.bin Check "Auto burn" to automatically download multiple bin files. At this point, the configuration is
Click Connect to connect the serial device. HiBurn opens the serial port and tries to start burning, so make sure no other program is occupying it (if you were using HyperTerminal or a serial-port assistant to view the serial log before burning, close the serial port they occupy). Reset the device by pressing the RESET button on the development board, then wait for Execution Successful to appear in the output box, which means burning has succeeded.
After successful burning, you need to manually click "Disconnect" to release the serial port; otherwise the tool keeps prompting "Wait connect success".
Burning with HiBurn has the following main advantages: it does not depend on VSCode, so there is no need to install VSCode, Node.js, the JDK, and various npm packages; and the download speed is faster, since HiBurn.exe's maximum baud rate can be set to 3000000, while DevEco Device Tool's maximum is 921600.
The main disadvantages of burning with HiBurn are as follows: you must manually click Disconnect, otherwise it re-downloads by default; after a successful burn, pressing RESET again without disconnecting the serial port burns the device again; HiBurn's serial-port parameters cannot be saved and must be set again the next time the program is opened, whereas DevEco can save them; and HiBurn involves more steps and is slightly more complicated than DevEco.
DevEco Device Tool integrates a serial-port tool that can conveniently connect to the development board. It is mainly used as follows:
Hi3516/Hi3518 series development boards: use the serial-port tool to run the image;
Hi3861 series development board: Use the serial port tool to execute AT commands.
The specific steps are as follows. First, click the Monitor button on the toolbar to open the serial port tool.
Then in TERMINAL, enter the corresponding number to select the serial port.
Finally, after entering the serial port tool, execute the relevant instructions to perform serial port debugging.
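For the Hi3861, a typical serial session uses WLAN AT commands of the kind shown in the OpenHarmony quick-start material; the SSID and password below are placeholders, and the exact command set should be checked against the board's documentation.

```
AT+STARTSTA                        # start STA (station) mode
AT+SCAN                            # scan for nearby access points
AT+SCANRESULT                      # list the scan results
AT+CONN="my_ssid",,2,"my_password" # connect (2 = WPA2-PSK key type)
AT+STASTAT                         # query the connection status
AT+DHCP=wlan0,1                    # obtain an IP address via DHCP
```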
3 Kernel basics
Developing HarmonyOS hardware devices relies on the access capabilities the kernel provides to the computer hardware, so kernel-related knowledge is the foundation of HarmonyOS device development. This chapter analyzes the main operating mechanisms of the HarmonyOS kernel and introduces how the kernel manages the system's processes and threads, memory, network, file system, and device drivers.
HarmonyOS uses the LiteOS kernel and the Linux kernel. The LiteOS kernel is divided into the LiteOS-A kernel and the LiteOS-M kernel, suitable for Cortex-A series chips and Cortex-M series chips respectively. For lightweight systems and small systems, LiteOS can be chosen; for standard systems, Linux. The LiteOS kernel is a real-time operating-system kernel for the IoT field, combining the lightness of an RTOS with the ease of use of Linux; it mainly includes basic core functions such as process and thread scheduling, memory management, the IPC mechanism, and timer management. In HarmonyOS, the kernel source code is divided into two repositories, kernel_liteos_a and kernel_liteos_m: kernel_liteos_a mainly targets small systems, while kernel_liteos_m mainly targets lightweight systems (mini systems).
To ensure ease of integration on different hardware, HarmonyOS currently defines three basic system types. After selecting a basic system type and completing the configuration of the required component set, device developers can implement their minimum system. The reference definitions are as follows:
The lightweight system (mini system) targets devices with MCU processors such as Arm Cortex-M and 32-bit RISC-V, whose hardware resources are extremely limited; the minimum supported device memory is 128 KiB. It can provide a variety of lightweight network protocols, a lightweight graphics framework, and rich IoT bus read/write components. Supported products include connection modules, sensors, and wearables.
The small system targets devices with application processors such as Arm Cortex-A; the minimum supported device memory is 1 MiB. It can provide higher security capabilities, a standard graphics framework, and video encoding/decoding multimedia capabilities. Supported products include IP cameras, electronic peepholes, and routers in the smart-home field, and driving recorders in the smart-travel field.
The standard system targets devices with application processors such as Arm Cortex-A; the minimum supported device memory is 128 MiB. It can provide enhanced interaction capabilities, 3D GPU and hardware-composition capabilities, more controls, graphics capabilities with richer animation effects, and a complete application framework.
HarmonyOS also provides a series of optional system components that device developers can configure as needed to support extended or customized development of special functions. The system combines these optional components into a series of system capabilities, which are described as features or functions and offered to device developers.
Kernel support by system type:
LiteOS-M | lightweight system
LiteOS-A | small system
Linux | small system, standard system
The LiteOS-M kernel is characterized by low power consumption and high performance. Its code structure is simple, mainly comprising the kernel minimum function set, a kernel abstraction layer, optional components, and the project directory. The HarmonyOS LiteOS-M kernel architecture includes a hardware-related layer and a hardware-independent layer, as shown in the figure below. The Kernel Arch module belongs to the hardware-related layer; it is organized by compilation toolchain and chip architecture and provides a unified HAL (Hardware Abstraction Layer) interface, improving hardware adaptability and meeting the expansion needs of diverse AIoT hardware and compilation toolchains. The Kernel Task and other modules belong to the hardware-independent layer: the basic kernel module provides core capabilities, the Components module provides network, file-system, and other component capabilities, the Utils module provides error handling, debugging, and other capabilities, and the KAL (Kernel Abstraction Layer) module provides unified standard interfaces.
The system clock and ticks per second are configured in the development board's configuration file target_config.h. The task, memory, IPC, and exception-handling modules can be tailored through configuration. When the system starts, the specified modules are initialized according to this configuration. The kernel boot process includes peripheral initialization, system-clock configuration, kernel initialization, and operating-system startup.
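As a sketch, a board's target_config.h sets values like the following. The macro names are the usual LiteOS-M configuration switches, but the concrete values here are illustrative, not the Hi3861's real settings.

```c
/* target_config.h (excerpt, illustrative values) */
#define OS_SYS_CLOCK                       (80000000UL) /* system clock, Hz      */
#define LOSCFG_BASE_CORE_TICK_PER_SECOND   (100UL)      /* 100 ticks per second  */
#define LOSCFG_BASE_CORE_TSK_LIMIT         (32)         /* max number of tasks   */
#define LOSCFG_BASE_IPC_QUEUE              (1)          /* compile in msg queues */
#define LOSCFG_BASE_IPC_SEM                (1)          /* compile in semaphores */
#define LOSCFG_PLATFORM_EXC                (1)          /* exception handling    */
```

Setting a module's switch to 0 tailors it out of the build, which is how the kernel is trimmed to fit a 128 KiB-class device.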
In HarmonyOS device development, the Hi3861 chip can use the LiteOS-M kernel.
To adapt to the rapid development of the IoT industry, the HarmonyOS lightweight kernel is continuously optimized and expanded, bringing application developers a friendly development experience and unified, open ecosystem capabilities. The important new features of the lightweight kernel LiteOS-A are as follows:
1. Rich new kernel mechanisms: newly added virtual memory, system calls, multi-core support, lightweight IPC (Inter-Process Communication), DAC (Discretionary Access Control), and other mechanisms enrich the kernel's capabilities. To improve software compatibility and the developer experience, support for multiple processes was also added, so that applications are memory-isolated and do not affect one another, improving the system's robustness.
2. Introducing a unified driver framework HDF (Hardware Driver Foundation): a unified driver standard for equipment manufacturers
Providers provide a more unified access method, making the driver easier to transplant, and strive to achieve one-time development and multi-system deployment.
3. Supports 1200+ standard POSIX interfaces, making application software easy to develop and transplant, and providing application developers with a more friendly
development experience.
4. The lightweight kernel is highly decoupled from the hardware. When new boards are added, the kernel code does not need to be modified.
The lightweight kernel mainly consists of a basic kernel, extended components, HDF framework, and POSIX interfaces. The extended functions of the lightweight kernel
such as file systems and network protocols (which do not run in user mode like microkernels) run in the kernel address space. The main consideration is that direct function calls
between components are much faster than inter-process communication or remote procedure calls. .
In HarmonyOS device development, the Hi3516 chip and Hi3518 chip can use the LiteOS-A kernel.
The kernel subsystem can thus provide a suitable OS kernel for device products with different resource constraints, supplying basic operating-system capabilities to the upper layers.
Linux kernel versions are divided into stable versions and long-term support (LTS) versions. A new stable version is released approximately every three months, including the latest hardware support, performance improvements and bug fixes. The disadvantage is that its maintenance life cycle is short, so product software cannot receive long-term, stable support.
LTS versions are maintained long term (bug fixes and security patches), generally for up to 6 years. By comparison, non-LTS kernels are maintained for only 6 months to 2 years, which cannot cover the complete life cycle of a commercial product and is likely to expose the product to the risk of unpatched security vulnerabilities. LTS updates do not include new feature upgrades, which ensures the stability of the version, so LTS versions are more suitable for commercial products that pursue stability.
The Linux kernel in HarmonyOS selects an appropriate LTS version as its kernel base. Currently, most devices use the 4.19 kernel. The 4.4-4.14 LTS kernels are older, have insufficient support for new features, and are scheduled to leave maintenance around 2023, so their remaining service life is short and they are not suitable for a first release. The 5.4 LTS version is not yet widely used in released products. 4.19 is the version most developers are familiar with, which helps shorten the development cycle.
The Linux kernel in HarmonyOS is recommended for devices with memory ≥ 1 MB. It is mainly used on resource-rich devices with screens, such as watches, mobile phones, head units and other devices with rich system resources.
From the system's perspective, a task is the smallest running unit that competes for system resources. Tasks can use or wait for system resources such as the CPU and memory, and run independently of other tasks. The task module of HarmonyOS LiteOS-M provides users with multiple tasks, implements switching between tasks, and helps users manage business program flows. The task module has the following features:
- Multi-tasking is supported, and one task represents one thread.
- Scheduling is preemptive: a high-priority task can interrupt a low-priority task, and a low-priority task can only be scheduled after the higher-priority tasks block or end.
- Tasks of the same priority are scheduled by round-robin time slices.
- There are 32 priorities [0-31]; 0 is the highest priority and 31 the lowest.
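The scheduling rule above — the highest-priority ready task always runs, with priority 0 highest — can be sketched as a ready bitmap in plain, self-contained C. This is an illustration, not LiteOS-M source; the names MarkReady and PickNextPriority are invented for the sketch:

```c
#include <stdint.h>

/* One bit per priority: bit n set => some task of priority n is ready.
 * Priority 0 is the highest, 31 the lowest, as in LiteOS-M. */
static uint32_t g_readyBitmap;

void MarkReady(int prio)   { g_readyBitmap |=  (1u << prio); }
void MarkUnready(int prio) { g_readyBitmap &= ~(1u << prio); }

/* Return the highest ready priority (lowest set bit), or -1 if none. */
int PickNextPriority(void)
{
    for (int prio = 0; prio < 32; prio++) {
        if (g_readyBitmap & (1u << prio)) {
            return prio;
        }
    }
    return -1; /* no task is ready */
}
```

If a priority-10 task is running and a priority-3 task becomes ready, PickNextPriority() returns 3: the higher-priority task preempts immediately.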
A task has multiple states. After system initialization is complete, a created task can compete for system resources and be scheduled by the kernel. Task states are usually divided into the following four types:
- Ready: the task is in the ready queue, waiting only for the CPU.
- Running: the task is being executed.
- Blocked: the task is not in the ready queue. This includes a suspended task, a delayed task, and a task waiting for a semaphore, a read/write queue or an event.
- Exited (Dead): the task has finished running and is waiting for the system to reclaim its resources.
The four status flows of tasks are shown in the figure below.
Ready → Running: after a task is created it enters the ready state. When a task switch occurs, the highest-priority task in the ready queue is executed and enters the running state; it is then removed from the ready queue.
Running → Blocked: when a running task becomes blocked (suspended, delayed, reading a semaphore, etc.), it is removed from the ready queue and its state changes from running to blocked. A task switch then occurs and the highest-priority task in the ready queue runs.
Blocked → Ready (Blocked → Running): when a blocked task is recovered (the task is resumed, the delay or semaphore-read times out, or the semaphore is read, etc.), it is added to the ready queue and changes from blocked to ready. If the priority of the resumed task is higher than that of the running task, a task switch occurs and it moves on from the ready state to the running state.
Ready → Blocked: a task may also be blocked (suspended) while in the ready state. Its state changes from ready to blocked, it is removed from the ready queue, and it does not take part in scheduling until it is resumed.
Running → Ready: after a higher-priority task is created or resumed, scheduling occurs and the highest-priority task in the ready queue changes to the running state. The task that was running changes from running to ready, but remains in the ready queue.
Running → Exited: when a running task ends, its state changes from running to exited. The exited state includes the normal exit and an Invalid state: if a task ends without deleting itself, it presents the Invalid state externally, which also counts as exited.
Blocked → Exited: when a blocked task calls the delete interface, its state changes from blocked to exited.
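The four states and their transitions can be modeled as a small state machine in plain C (an illustration; the enum and function names are invented, not kernel identifiers):

```c
typedef enum { STATE_READY, STATE_RUNNING, STATE_BLOCKED, STATE_DEAD } TaskState;
typedef enum { EV_SCHEDULE, EV_PREEMPT, EV_BLOCK, EV_RESUME, EV_EXIT } TaskEvent;

/* Apply one of the transitions described above; returns the new state,
 * or the old state unchanged if the transition is not allowed. */
TaskState Transition(TaskState s, TaskEvent e)
{
    switch (e) {
    case EV_SCHEDULE: /* ready -> running */
        return (s == STATE_READY) ? STATE_RUNNING : s;
    case EV_PREEMPT:  /* running -> ready (still in the ready queue) */
        return (s == STATE_RUNNING) ? STATE_READY : s;
    case EV_BLOCK:    /* running -> blocked, or ready -> blocked */
        return (s == STATE_RUNNING || s == STATE_READY) ? STATE_BLOCKED : s;
    case EV_RESUME:   /* blocked -> ready */
        return (s == STATE_BLOCKED) ? STATE_READY : s;
    case EV_EXIT:     /* running or blocked -> exited */
        return STATE_DEAD;
    }
    return s;
}
```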
Task ID: returned to the user through a parameter when the task is created, the ID is an important identifier of the task and is unique in the system. Through the task ID, users can suspend or resume a specified task, query its name, and so on.
Task priority: the priority determines the order in which tasks are executed. When a task switch occurs, the highest-priority task in the ready queue is executed.
Task entry function: the function executed after a new task is scheduled. It is implemented by the user and specified when the task is created.
Task stack: each task has an independent stack space, called the task stack. The information saved in the stack includes local variables, registers, function parameters, return addresses, etc.
Task context: the resources a task uses while running, such as registers, are called the task context. When a task is suspended, other tasks continue to execute and may modify the values in these registers and other resources. If the context were not saved when the task is switched out, unknown errors could occur when the task resumes. Therefore, on a task switch the context of the task being switched out is saved in its own task stack, so that when the task resumes, the context at the moment of suspension can be restored from the stack and the interrupted code can continue to execute.
Task control block (TCB): each task has a task control block. The TCB contains information such as the task's context stack pointer, state, priority, ID, name and stack size, and reflects the running status of each task.
Task switching includes obtaining the highest-priority task in the ready queue, saving the context of the task being switched out, and restoring the context of the task being switched in.
Task running mechanism: when a user creates a task, the system initializes the task stack and presets the context. The system also places the address of the task entry function in the corresponding position, so that when the task starts and enters the running state for the first time, the task entry function is executed.
During program execution, when an event occurs that the CPU must handle immediately, the CPU temporarily suspends the current program and processes that event; this process is called an interrupt. When the hardware raises an interrupt, the corresponding interrupt handler is found through the interrupt number, and executing that handler completes the interrupt processing.
Through the interrupt mechanism, the CPU can perform other tasks while a peripheral does not need its attention, and respond when the peripheral requires it by interrupting the current task. This prevents the CPU from spending large amounts of time waiting for and polling peripheral status, effectively improving the system's real-time response and execution efficiency.
Interrupt number: a specific identifier of an interrupt request signal. The computer determines which device issued the interrupt request based on the interrupt number.
Interrupt request: an "urgent event" applies to the CPU (by sending an electrical pulse signal), asking it to temporarily stop the currently executing task to handle the event. This process is called an interrupt request.
Interrupt priority: so that the system can respond to and handle all interrupts in time, interrupt sources are divided into several levels based on their importance and urgency.
Interrupt handler: when a peripheral issues an interrupt request, the CPU suspends the current task and responds to the request, that is, it executes the interrupt handler. Every device that can generate an interrupt has a corresponding interrupt handler.
Interrupt triggering: the interrupt source sends an interrupt signal to the interrupt controller, which arbitrates between pending interrupts, determines the priority, and forwards the interrupt signal to the CPU. When an interrupt source generates an interrupt signal, the interrupt trigger flag is set to 1, indicating that the interrupt source has raised an interrupt that needs to be handled.
Interrupt vector table: the storage area holding the interrupt vectors. Interrupt vectors correspond to interrupt numbers and are stored in the table in interrupt-number order.
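The vector-table lookup described above — the interrupt number indexes a table of handler addresses — can be sketched in self-contained C (an illustration with invented names, not the LiteOS-M implementation):

```c
#include <stddef.h>

#define MAX_IRQ 32

typedef void (*IrqHandler)(void);

/* Interrupt vector table: handler addresses stored by interrupt number. */
static IrqHandler g_vectorTable[MAX_IRQ];
static int g_lastHandled = -1;

/* Example handler for a hypothetical device on interrupt number 5. */
static void Uart0Handler(void) { g_lastHandled = 5; }

int RegisterIrq(int irqNum, IrqHandler h)
{
    if (irqNum < 0 || irqNum >= MAX_IRQ || h == NULL) return -1;
    g_vectorTable[irqNum] = h;
    return 0;
}

/* What the CPU conceptually does on an interrupt: use the interrupt
 * number to index the vector table and run the registered handler. */
int DispatchIrq(int irqNum)
{
    if (irqNum < 0 || irqNum >= MAX_IRQ || g_vectorTable[irqNum] == NULL) return -1;
    g_vectorTable[irqNum]();
    return 0;
}
```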
When an interrupt request is generated, the CPU suspends the current task and responds to the peripheral's request. Users can register interrupt handlers as needed and thereby specify what the CPU does in response to a given interrupt request. The interrupt module of the HarmonyOS LiteOS-M kernel provides the following functions. HalHwiCreate creates an interrupt, registering the interrupt number, trigger mode, priority and interrupt handler; the handler is called when the interrupt is triggered. HalHwiDelete deletes the interrupt with the specified interrupt number. LOS_IntUnLock turns interrupts on, enabling all interrupt responses on the current processor. LOS_IntLock turns interrupts off, disabling all interrupt responses on the current processor. LOS_IntRestore restores the interrupt state that existed before the LOS_IntLock or LOS_IntUnLock operation.
The development process using the interrupt mechanism includes the following steps:
1. Call the HalHwiCreate interface to create an interrupt.
2. Call the TestHwiTrigger interface to trigger the specified interrupt (this interface is defined in the test suite and simulates an external interrupt by writing the interrupt controller's registers; ordinary peripheral devices do not need this step).
3. Call the HalHwiDelete interface to delete the specified interrupt. Whether this step is needed depends on the actual situation.
Supplementary notes: the maximum number of supported interrupts and the number of configurable interrupt priorities depend on the specific hardware. An interrupt handler must not run too long, otherwise it affects the CPU's timely response to other interrupts. During interrupt handling, functions that cause scheduling, such as LOS_Schedule, must not be executed directly or indirectly. The input parameter of LOS_IntRestore() must be the value returned by the matching LOS_IntLock() (that is, the CPSR value before interrupts were disabled). Interrupts 0-15 in Cortex-M series processors are reserved for internal use, so users are advised not to request or create them.
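The pairing rule for LOS_IntLock()/LOS_IntRestore() can be modeled in plain C with a flag standing in for the CPSR interrupt-enable state (a simulation for illustration only; IntLock and IntRestore here are invented stand-ins, not the kernel functions):

```c
#include <stdbool.h>
#include <stdint.h>

static bool g_irqEnabled = true; /* stands in for the CPSR interrupt-enable bit */

/* Disable interrupts and return the previous state, like LOS_IntLock(). */
uint32_t IntLock(void)
{
    uint32_t prev = g_irqEnabled ? 1u : 0u;
    g_irqEnabled = false;
    return prev;
}

/* Restore the state returned by the matching IntLock(), like LOS_IntRestore(). */
void IntRestore(uint32_t prev)
{
    g_irqEnabled = (prev != 0);
}
```

Nesting works because each IntRestore() receives the value saved by its own matching IntLock(): the inner restore leaves interrupts disabled, and only the outer restore re-enables them.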
The programming example below implements the following functions: creating an interrupt, triggering it, and deleting it. The code shows how to create and delete an interrupt; when the specified interrupt number HWI_NUM_TEST raises an interrupt, the interrupt handler HwiUsrIrq is called:

#include "los_interrupt.h"

#define HWI_NUM_TEST 7

/* Interrupt handler, called when interrupt HWI_NUM_TEST is triggered */
STATIC VOID HwiUsrIrq(VOID)
{
    printf("in the function HwiUsrIrq\n");
}

static UINT32 Example_Interrupt(VOID)
{
    UINT32 ret;
    HWI_PRIOR_T hwiPrio = 3;
    HWI_MODE_T mode = 0;
    HWI_ARG_T arg = 0;

    /* Create the interrupt */
    ret = HalHwiCreate(HWI_NUM_TEST, hwiPrio, mode, (HWI_PROC_FUNC)HwiUsrIrq, arg);
    if (ret == LOS_OK) {
        printf("Hwi create success!\n");
    } else {
        printf("Hwi create failed!\n");
        return LOS_NOK;
    }

    /* Delay 50 ticks; when the hardware interrupt occurs, HwiUsrIrq is called */
    LOS_TaskDelay(50);

    /* Delete the interrupt */
    ret = HalHwiDelete(HWI_NUM_TEST);
    if (ret == LOS_OK) {
        printf("Hwi delete success!\n");
    } else {
        printf("Hwi delete failed!\n");
        return LOS_NOK;
    }
    return LOS_OK;
}
The memory management module manages the system's memory resources. It is one of the core modules of the operating system and mainly covers memory initialization, allocation and release. While the system runs, the memory management module manages the use of memory by users and the OS through memory allocation and release, optimizing memory utilization and efficiency while solving the system's memory fragmentation problem as far as possible. Memory management in HarmonyOS LiteOS-M is divided into static memory management and dynamic memory management, each providing initialization, allocation, release and other functions.
Dynamic memory allocates memory blocks of user-specified size from a dynamic memory pool. Its advantage is on-demand allocation; its disadvantage is that fragmentation may appear in the pool. Static memory allocates blocks of a preset (fixed) size, set at initialization, from a static memory pool. Its advantages are high allocation and release efficiency and no fragmentation in the pool; its limitation is that only blocks of the preset size can be requested, so memory cannot be requested in arbitrary sizes on demand.
Static memory is essentially a static array. The block size in a static memory pool is set during initialization and cannot be changed afterwards. The static memory pool consists of a control block, LOS_MEMBOX_INFO, and several memory blocks, LOS_MEMBOX_NODE, of the same size. The control block is located at the head of the memory pool and is used for memory block management; it records the block size uwBlkSize, the number of blocks uwBlkNum, the number of allocated blocks uwBlkCnt and the free-block list stFreeList. Memory blocks are allocated and released at block granularity, and each block contains a pointer pstNext to the next block.
When users need fixed-length memory, they can obtain it through static memory allocation; once it is no longer needed, the occupied block is released back to the static memory pool.
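The control-block-plus-free-list layout described above can be sketched as a tiny fixed-block allocator in portable C. This is a simplified model for illustration, not the LOS_Membox source; all names and sizes here are invented:

```c
#include <stddef.h>

#define BLK_SIZE 16   /* payload bytes per block (illustrative) */
#define BLK_NUM  4    /* number of blocks in the pool (illustrative) */

typedef struct Node { struct Node *next; } Node;

typedef struct {      /* control block at the head of the pool */
    size_t blkSize;
    size_t blkNum;
    size_t blkCnt;    /* blocks currently allocated */
    Node  *freeList;
} BoxInfo;

static union {
    unsigned char bytes[sizeof(BoxInfo) + BLK_NUM * (sizeof(Node) + BLK_SIZE)];
    void *align;      /* force pointer alignment for the node headers */
} g_pool;

void BoxInit(void)
{
    BoxInfo *info = (BoxInfo *)g_pool.bytes;
    unsigned char *blk = g_pool.bytes + sizeof(BoxInfo);
    info->blkSize = BLK_SIZE;
    info->blkNum  = BLK_NUM;
    info->blkCnt  = 0;
    info->freeList = NULL;
    for (size_t i = 0; i < BLK_NUM; i++) {   /* chain every block onto the free list */
        Node *n = (Node *)(blk + i * (sizeof(Node) + BLK_SIZE));
        n->next = info->freeList;
        info->freeList = n;
    }
}

void *BoxAlloc(void)
{
    BoxInfo *info = (BoxInfo *)g_pool.bytes;
    if (info->freeList == NULL) return NULL; /* pool exhausted */
    Node *n = info->freeList;
    info->freeList = n->next;
    info->blkCnt++;
    return (void *)(n + 1);                  /* payload follows the node header */
}

void BoxFree(void *mem)
{
    BoxInfo *info = (BoxInfo *)g_pool.bytes;
    Node *n = (Node *)mem - 1;
    n->next = info->freeList;                /* push back onto the free list */
    info->freeList = n;
    info->blkCnt--;
}
```

Allocation and release are both a single list operation, which is why static pools are fast and fragmentation-free, at the cost of a fixed block size.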
The static memory management of the HarmonyOS LiteOS-M kernel provides the following APIs. LOS_MemboxInit initializes a static memory pool, setting its start address, total size and block size according to the input parameters. LOS_MemboxClr clears the contents of a static memory block obtained from the pool. LOS_MemboxAlloc requests a static memory block from the specified pool. LOS_MemboxFree releases a static memory block back to the pool. LOS_MemboxStatisticsGet obtains information about a specified static memory pool, including the total number of blocks, the number of allocated blocks and the block size. LOS_ShowBox prints all node information of the specified static memory pool (at print level LOS_INFO_LEVEL), including the start address of the pool, the block size, the total number of blocks, the start address of each free block, and the start addresses of all used blocks.
The typical development process using static memory is as follows. 1. Plan a memory area as the static memory pool. 2. Call LOS_MemboxInit to initialize the pool. Initialization divides the memory area specified by the input parameters into N blocks (N depends on the total size and the block size), hangs all blocks on the free list, and places the control header at the start of the memory. 3. Call LOS_MemboxAlloc to allocate static memory; the system takes the first free block from the free list and returns its start address. 4. Call LOS_MemboxClr to clear the memory block at the given address. 5. Call LOS_MemboxFree to release the memory block.
The programming example below performs the following steps: first initialize a static memory pool, then request a static memory block from it, store a value in the block, print the value, clear the block's contents, and finally release the block. The sample code is as follows:

#include "los_membox.h"

VOID Example_StaticMem(VOID)
{
    UINT32 *mem = NULL;
    UINT32 blkSize = 10;
    UINT32 boxSize = 100;
    UINT32 boxMem[1000];
    UINT32 ret;

    /* Initialize the memory pool */
    ret = LOS_MemboxInit(&boxMem[0], boxSize, blkSize);
    if (ret != LOS_OK) {
        printf("Membox init failed!\n");
        return;
    } else {
        printf("Membox init success!\n");
    }

    /* Allocate a memory block */
    mem = (UINT32 *)LOS_MemboxAlloc(boxMem);
    if (NULL == mem) {
        printf("Mem alloc failed!\n");
        return;
    }

    /* Assignment */
    *mem = 828;
    printf("*mem = %d\n", *mem);

    /* Clear the memory block */
    LOS_MemboxClr(boxMem, mem);
    printf("Mem clear success\n*mem = %d\n", *mem);

    /* Release the memory block */
    ret = LOS_MemboxFree(boxMem, mem);
    if (LOS_OK == ret) {
        printf("Mem free success!\n");
    } else {
        printf("Mem free failed!\n");
    }
    return;
}
Dynamic memory management means that, when memory resources are sufficient, memory blocks of any size are allocated from a relatively large contiguous memory region configured in the system (the memory pool, also called heap memory) according to user needs. When a memory block is no longer needed, it can be released back to the pool for later use. Compared with static memory, the advantage of dynamic memory management is on-demand allocation; the disadvantage is that fragmentation tends to appear in the pool. The dynamic memory of HarmonyOS LiteOS-M is based on the TLSF algorithm, with optimized interval division to achieve better performance and a lower fragmentation rate.
Free memory blocks are managed by multiple free lists according to their size. The sizes are divided into two ranges: [4, 127] bytes and [2^7, 2^31] bytes.
The [4, 127] range is divided evenly, as shown in the green part of the figure above, into 31 small intervals; the block sizes in this range are multiples of 4 bytes. Each small interval corresponds to one free list and one bit that marks whether that free list is empty; a bit value of 1 means the list is non-empty. The 31 small intervals of the [4, 127] range thus correspond to 31 bits.
Free blocks larger than 127 bytes are managed by free lists organized in power-of-two intervals. There are 24 such intervals in total, and each is further divided evenly into 8 second-level intervals; see the blue Size Class and Size SubClass parts of the figure above. Each second-level interval corresponds to a free list and a bit marking whether that list is empty, giving 24*8 = 192 second-level intervals, and correspondingly 192 free lists and 192 bits.
For example, when a 40-byte free block needs to be inserted into a free list, it falls in the interval [40, 43] and corresponds to the 10th free list and the 10th bit of the bitmap. The block is mounted on the 10th free list, and the bitmap is updated if necessary. When 40 bytes of memory are requested, a free list holding blocks that satisfy the request is located according to the bitmap.
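For the [4, 127] range, the mapping from block size to free-list number in the example above (40 bytes → 10th list, 10th bitmap bit) is just division by 4. A small sketch, with invented function names:

```c
/* Map a free-block size in [4, 127] to its first-level free-list number
 * (matching the "10th list for 40 bytes" example: [4,7] -> 1, ...,
 * [40,43] -> 10, ..., [124,127] -> 31). Returns -1 outside the range. */
int FirstLevelListIndex(unsigned size)
{
    if (size < 4 || size > 127) return -1; /* handled by the second level */
    return (int)(size / 4);
}

/* Bit that marks whether the corresponding free list is non-empty. */
unsigned BitmapMask(int listIndex)
{
    return 1u << listIndex;
}
```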
A suitable free list is then selected, and free memory nodes are obtained from it. If the node obtained is larger than the requested size, the node is split and the remainder is remounted on the corresponding free list. Similarly, when a 580-byte free block needs to be inserted into a free list, it corresponds to the second-level interval [2^9, 2^9+2^6], the 31+2*8 = 47th free list, and the 47th bit of the bitmap marks whether that list is empty. The 580-byte block is mounted on the 47th free list, and the bitmap is updated if necessary. When 580 bytes of memory are requested, a free list holding blocks that satisfy the request is located according to the bitmap and a free node is taken from it; if the node is larger than requested, it is split and the remainder remounted on the corresponding free list. If the matching free list is empty, larger intervals are searched for a non-empty free list that can satisfy the request; in the actual calculation, the first free list that can satisfy the request is used.
The memory pool header contains the memory pool information, the bitmap array and the free-list array. The memory pool information includes the start address of the pool, the total size of the heap area, and the pool attributes. The bitmap array consists of seven 32-bit unsigned integers, and each bit marks whether the corresponding free list has a free memory node mounted on it. The free-list array contains 223 free-list head nodes; each head node maintains a memory node head plus the predecessor and successor free nodes in that free list.
The node section of the memory pool contains three kinds of nodes: unused (free) nodes, used nodes, and the tail node. Each node maintains a predecessor pointer to the previous node in the pool, as well as the node's size and an in-use flag. Free nodes and used nodes are followed by their data area; the tail node has no data area.
The main job of dynamic memory management is to dynamically allocate and manage the memory regions requested by users. It is mainly used in scenarios where users need memory blocks of varying sizes. When memory is needed, users request a block of the specified size through the operating system's dynamic memory allocation function; when it is no longer needed, they return it through the dynamic memory release function, so that it can be reused.
The specific interfaces of the dynamic memory module are as follows. LOS_MemInit initializes a specified dynamic memory pool of a given size. LOS_MemAlloc requests memory of the given size from a specified dynamic memory pool. LOS_MemFree releases memory obtained from the specified pool. LOS_MemRealloc reallocates a memory block according to the new size and copies the contents of the old block to the new one; if the new block is allocated successfully, the old block is released. LOS_MemAllocAlign requests memory of the given size whose address is aligned to the given boundary from the specified pool. LOS_MemPoolSizeGet obtains the total size of a specified pool. LOS_MemTotalUsedGet obtains the total used size of a specified pool. LOS_MemInfoGet obtains the memory structure information of a specified pool, including the free memory size, used memory size, number of free blocks, number of used blocks, and the maximum free block size. LOS_MemPoolList prints all memory pools initialized in the system, including each pool's start address, size, total free size, total used size, maximum free block size, number of free blocks and number of used blocks; it is valid only when LOSCFG_MEM_MUL_POOL is enabled. LOS_MemFreeNodeShow prints the size and number of the free memory blocks of a specified pool. LOS_MemUsedNodeShow prints the size and number of the used memory blocks of a specified pool. LOS_MemIntegrityCheck checks the integrity of a specified memory pool; it is valid only when the corresponding integrity-check configuration item is enabled.
The typical development process using dynamic memory is as follows. First, call LOS_MemInit to initialize a memory pool. Initialization creates a memory pool control header and a tail node EndNode, and marks the remaining memory as a FreeNode memory node; EndNode is the node at the end of the pool, with size 0. Next, call LOS_MemAlloc to request dynamic memory of any size. The system checks whether the pool still has a free block larger than the requested amount: if so, it hands out a memory block and returns it in the form of a pointer; if not, it returns NULL. If the chosen free block is larger than the requested amount, the block is split, and the remainder is mounted on the free list as a new free block. Finally, call LOS_MemFree to release the memory block so that it can be reused; the released block is reclaimed and marked as a FreeNode. When reclaiming a block, adjacent FreeNodes are automatically merged.
The following programming example uses dynamic memory. It performs these steps: initialize a dynamic memory pool; request a memory block from the pool; store a value in the block; print the value; and release the block. The sample code is as follows:

#include "los_memory.h"

#define TEST_POOL_SIZE (2 * 1024)
UINT8 g_testPool[TEST_POOL_SIZE];

VOID Example_DynMem(VOID)
{
    UINT32 *mem = NULL;
    UINT32 ret;

    /* Initialize the memory pool */
    ret = LOS_MemInit(g_testPool, TEST_POOL_SIZE);
    if (LOS_OK != ret) {
        printf("Mem init failed!\n");
        return;
    }
    printf("Mem init success!\n");

    /* Allocate a memory block */
    mem = (UINT32 *)LOS_MemAlloc(g_testPool, 4);
    if (NULL == mem) {
        printf("Mem alloc failed!\n");
        return;
    }

    /* Assignment */
    *mem = 828;
    printf("*mem = %d\n", *mem);

    /* Release the memory */
    ret = LOS_MemFree(g_testPool, mem);
    if (LOS_OK == ret) {
        printf("Mem free success!\n");
    } else {
        printf("Mem free failed!\n");
    }
    return;
}
3.2.4.1 Events
An event is a communication mechanism between tasks and can be used for synchronization between tasks. Events have the following characteristics:
- Event synchronization between tasks can be one-to-many or many-to-many. One-to-many means one task can wait for multiple events; many-to-many means multiple tasks can wait for multiple events. However, one write of an event wakes at most one task from blocking.
- Interfaces are provided for event initialization, event read/write, event clearing and event destruction.
1. Event initialization: creates an event control block, which maintains the set of events already processed and a list of tasks waiting for specific events.
2. Event write: the specified event is written to the event control block. The control block updates its event set and traverses its task list, deciding by each task's wait condition whether to wake the task.
3. Event read: if the requested event is already present, it is returned synchronously at once. Otherwise the return time depends on the timeout and on event triggering: if the awaited event condition arrives before the timeout expires, the blocked task is woken immediately; otherwise the task is woken when the timeout expires.
Whether the read condition is met depends on the parameters eventMask and mode: eventMask is the set of events of interest, and mode is the handling method, of which there are three:
LOS_WAITMODE_AND: return only when all events in eventMask have occurred.
LOS_WAITMODE_OR: return when any event in eventMask has occurred.
LOS_WAITMODE_CLR: after a successful read, the corresponding events are cleared. This mode must be used together with LOS_WAITMODE_AND or LOS_WAITMODE_OR.
4. Event clearing: clears the event set of the event control block according to the specified mask. A mask of 0 clears the whole event set; a mask of 0xffff clears nothing and leaves the event set as it is.
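The read-mode and clearing rules above are plain bitwise operations, which can be shown in self-contained C (a model with invented names, not the LiteOS event module itself):

```c
#include <stdint.h>
#include <stdbool.h>

#define WAITMODE_AND 1
#define WAITMODE_OR  2

/* Decide whether a read with the given eventMask is satisfied by the
 * events currently recorded in the event set (models the rules above). */
bool EventReadSatisfied(uint32_t eventSet, uint32_t eventMask, int mode)
{
    if (mode == WAITMODE_AND) {
        return (eventSet & eventMask) == eventMask; /* all requested bits present */
    }
    return (eventSet & eventMask) != 0;             /* any requested bit present  */
}

/* Event clearing by mask: set &= mask, so a mask of 0 clears everything
 * and a mask of 0xffff leaves the set unchanged, as described above. */
uint32_t EventClear(uint32_t eventSet, uint32_t mask)
{
    return eventSet & mask;
}
```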
There are only two states of a mutex lock at any time, unlocked or locked. When a task holds the mutex, the mutex is in a locked state, and the task
acquires ownership of the mutex. When the task releases it, the mutex is unlocked and the task loses ownership of the mutex. When a task holds a mutex lock,
other tasks will no longer be able to unlock or hold the mutex lock.
In a multi-tasking environment, there are often application scenarios where multiple tasks compete for the same shared resources. Mutex locks can be used to control shared resources.
Source protection to achieve exclusive access. In addition, mutex locks can solve the priority flipping problem of semaphores.
In a multi-tasking environment, multiple tasks may access the same public resource, and some public resources cannot be shared
and must be processed exclusively by one task. How does a mutex avoid this conflict? When a mutex is used to serialize access to such a
resource, the mutex is locked while a task is accessing the resource. Any other task that then tries to access the public resource is blocked until the mutex
is released by the task holding it, after which another task can access the public resource, locking the mutex again. This guarantees that only one
task accesses the public resource at a time and preserves the integrity of operations on it.
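As an illustration of this exclusive-access pattern (using POSIX threads here rather than the LiteOS LOS_Mux* interfaces, so the API names differ from the kernel's own), two tasks increment a shared counter under a mutex:

```c
#include <pthread.h>

static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;
static long g_counter = 0;   /* the shared ("public") resource */

/* Each task increments the shared counter; the mutex guarantees that
 * only one task manipulates the resource at a time. */
static void *TaskEntry(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&g_lock);    /* blocks while another task holds it */
        g_counter++;                    /* exclusive access to the resource */
        pthread_mutex_unlock(&g_lock);  /* release ownership */
    }
    return NULL;
}

static long RunTwoTasks(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, TaskEntry, NULL);
    pthread_create(&t2, NULL, TaskEntry, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return g_counter;   /* 200000: no increments are lost under the mutex */
}
```

Without the lock/unlock pair, the two tasks would interleave their read-modify-write sequences and lose updates.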
A queue, also called a message queue, is a data structure commonly used for communication between tasks. A queue receives
fixed-length messages from tasks or interrupts and, depending on the interface used, decides whether to store the delivered message in its own queue space.
The task can read messages from the queue. When the message in the queue is empty, the reading task is suspended; when there is a new message in the queue, the suspended reading
task is awakened and processes the new message. Tasks can also write messages to the queue. When the queue is full of messages, the writing task is suspended; when there are idle message
nodes in the queue, the suspended writing task is awakened and writes messages.
The blocking behavior of the read and write interfaces can be adjusted through the timeout parameters of the read-queue and write-queue operations. If both
timeouts are set to 0, the task is never suspended and the interface returns immediately: non-blocking mode. Conversely, if the read-queue and write-queue
timeouts are set to a value greater than 0, the queue operates in blocking mode.
Message queues provide an asynchronous processing mechanism that allows a message to be put into the queue but not processed immediately. At the same time, the queue also has the
function of buffering messages. You can use the queue to implement asynchronous communication of tasks. The queue has the following characteristics:
• Messages are queued first-in-first-out, and asynchronous reads and writes are supported.
• Both the read-queue and write-queue operations support a timeout mechanism.
• Each time a message is read, its message node is set back to idle.
• The message format is agreed upon by the two communicating parties, and messages of different lengths are allowed (up to the message node size of the queue).
• A task can receive messages from, and send messages to, any message queue.
• Multiple tasks can receive messages from, and send messages to, the same message queue.
• The queue space required when a queue is created is allocated internally by the create interface.
1. When a queue is created, the queue ID is returned if creation succeeds.
2. The queue control block maintains a message head position (Head) and a message tail position (Tail) to indicate the storage status of messages in the
current queue. Head is the start of the occupied message nodes, and Tail is the end of the occupied nodes and the start of the idle
nodes. When the queue is first created, both Head and Tail point to the start of the queue.
3. When writing to the queue, readWriteableCnt[1] determines whether the queue can be written; a full queue (readWriteableCnt[1] is 0) cannot be
written. The write queue supports two write modes: writing to the tail node or writing to the head node. For a tail write, the first idle message node
found from Tail is the write target; if Tail already points to the end of the queue, it wraps around to the start.
For a head write, the node just before Head is the write target; if Head points to the start of the queue, it wraps around to the end.
4. When reading from the queue, readWriteableCnt[0] determines whether there are messages to read; reading from an empty queue
(readWriteableCnt[0] is 0) suspends the task. If a message can be read, the node written earliest, found from Head, is read;
if Head already points to the end of the queue, it wraps around.
5. When deleting a queue, the queue is located by its ID, its state is set to unused, its control block is reset to the initial state, and the memory occupied by the queue is released.
Figure 3-9 Schematic diagram of queue reading and writing data operations
The figure above illustrates reading from and writing to the queue. Only tail-node writes are shown; head-node writes work in an analogous way.
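The Head/Tail bookkeeping and wraparound described above can be modeled with a plain-C ring buffer. This is a simplified sketch of the mechanism (tail writes only, no task blocking), not LiteOS's actual queue code:

```c
#include <string.h>

#define QUEUE_LEN 4    /* number of message nodes */
#define MSG_SIZE  8    /* size of each message node */

typedef struct {
    char buf[QUEUE_LEN][MSG_SIZE];
    int head;                 /* first occupied node (read position) */
    int tail;                 /* first idle node (write position) */
    int readWriteableCnt[2];  /* [0] readable messages, [1] writable nodes */
} Queue;

static void QueueInit(Queue *q)
{
    memset(q, 0, sizeof(*q));
    q->readWriteableCnt[1] = QUEUE_LEN;  /* all nodes idle at creation */
}

/* Write to the tail node; returns 0 on success, -1 if the queue is full. */
static int QueueWrite(Queue *q, const char *msg)
{
    if (q->readWriteableCnt[1] == 0) {
        return -1;                        /* full: a real task would block */
    }
    strncpy(q->buf[q->tail], msg, MSG_SIZE - 1);
    q->tail = (q->tail + 1) % QUEUE_LEN;  /* wraparound at the queue end */
    q->readWriteableCnt[1]--;
    q->readWriteableCnt[0]++;
    return 0;
}

/* Read the oldest message from the head node; -1 if the queue is empty. */
static int QueueRead(Queue *q, char *out)
{
    if (q->readWriteableCnt[0] == 0) {
        return -1;                        /* empty: a real task would block */
    }
    memcpy(out, q->buf[q->head], MSG_SIZE);
    q->head = (q->head + 1) % QUEUE_LEN;  /* the node becomes idle again */
    q->readWriteableCnt[0]--;
    q->readWriteableCnt[1]++;
    return 0;
}
```

Where this sketch returns -1, the kernel instead suspends the calling task until the queue becomes writable or readable (or the timeout expires), matching the blocking modes described earlier.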
3.2.4.4 Semaphore
Semaphore is a mechanism for communication between tasks, which can achieve synchronization between tasks or mutually exclusive access to shared resources.
The data structure of a semaphore usually contains a count value that records the number of available resources. Its meaning is:
• 0: the semaphore is currently unavailable, and there may be tasks blocked waiting on it.
• A positive value: the semaphore is available, and the value equals the number of remaining available resources.
There are the following differences in usage between semaphores for synchronization and semaphores for mutual exclusion:
• When used for mutual exclusion, the initial count value is not 0 and represents the number of available shared resources. Before using a shared
resource, a task first acquires the semaphore, then uses the resource, and releases the semaphore afterwards. When all shared resources are taken,
that is, when the count has dropped to 0, any other task that tries to acquire the semaphore is blocked, which guarantees mutually exclusive
access to the shared resources. When the number of shared resources is 1, a binary semaphore is recommended, a mechanism similar to a mutex.
• When used for synchronization, the initial count value is 0. Task 1 acquires the semaphore and blocks until task 2 or an interrupt releases the
semaphore; only then does task 1 enter the Ready or Running state, thereby achieving synchronization between tasks.
1. Semaphore initialization: memory is allocated for the configured N semaphores (N is user-configurable through the
LOSCFG_BASE_IPC_SEM_LIMIT macro), and all semaphores are initialized as unused and added to an unused linked list for the system to use.
2. Semaphore creation: a semaphore is taken from the unused list and its initial count value is set.
3. Semaphore request: if the counter value is greater than 0, it is decremented by 1 and success is returned directly. Otherwise the task blocks
and waits for another task to release the semaphore; a waiting timeout can be set. When a task is blocked on a semaphore, it is placed at the tail of the semaphore's
list of waiting tasks.
4. Semaphore release: if no task is waiting on the semaphore, the counter is incremented by 1 and the call returns. Otherwise, the first task waiting
on the semaphore is woken up.
5. Semaphore deletion: the semaphore in use is marked unused and hung back on the unused linked list.
A semaphore allows multiple tasks to access a shared resource at the same time while limiting the maximum number of simultaneous accessors.
When the number of tasks accessing the resource reaches the maximum the resource allows, further tasks trying to acquire it are blocked until some
task releases the semaphore.
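Steps 3 and 4 above amount to the following counter logic. This is a non-blocking plain-C sketch in which "blocking" is modeled as a waiting count; a real LOS_SemPend would suspend the task instead of returning failure:

```c
typedef struct {
    int count;    /* number of currently available resources */
    int waiting;  /* tasks blocked on this semaphore (modeled as a count) */
} Sem;

/* Request the semaphore (LOS_SemPend-like, step 3). */
static int SemPend(Sem *s)
{
    if (s->count > 0) {
        s->count--;    /* resource available: take it and succeed */
        return 0;
    }
    s->waiting++;      /* otherwise the task would block and wait */
    return -1;
}

/* Release the semaphore (LOS_SemPost-like, step 4). */
static void SemPost(Sem *s)
{
    if (s->waiting > 0) {
        s->waiting--;  /* wake one waiting task; it consumes the resource */
    } else {
        s->count++;    /* nobody waiting: just increment the counter */
    }
}
```

Initializing count to 1 gives the binary-semaphore (mutual exclusion) behavior; initializing it to 0 gives the synchronization behavior described above.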
Time management is based on the system clock and provides all time-related services to applications.
The system clock is generated by interrupts triggered by the output pulses of a timer/counter, and is generally defined as an integer or long integer.
The period of the output pulse is called a "clock tick". The system clock is also called the time base or Tick.
Users measure time in seconds and milliseconds, while the operating system measures time in Ticks. When a user operation on the system
is needed, such as suspending or delaying a task, the time management module converts between Ticks and seconds/milliseconds.
The HarmonyOS LiteOS-M kernel time management module provides time conversion and statistics functions. It uses two time units:
• Cycle: the smallest timing unit of the system. The length of a Cycle is determined by the system main clock frequency.
• Tick: the basic time unit of the operating system, determined by the user-configured number of Ticks per second.
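The Tick/millisecond conversion the module performs can be sketched as follows. LOSCFG_BASE_CORE_TICK_PER_SECOND is the user-configured tick rate; the value 100 here is only an example, and the up-rounding mirrors the usual convention that a requested delay is never shortened:

```c
#include <stdint.h>

#define LOSCFG_BASE_CORE_TICK_PER_SECOND 100U  /* example: 100 ticks/s = 10 ms tick */

/* Convert milliseconds to Ticks, rounding up so a delay never ends early. */
static uint32_t Ms2Tick(uint32_t millisec)
{
    return (uint32_t)(((uint64_t)millisec * LOSCFG_BASE_CORE_TICK_PER_SECOND
                       + 999U) / 1000U);
}

/* Convert Ticks back to milliseconds. */
static uint32_t Tick2Ms(uint32_t ticks)
{
    return (uint32_t)((uint64_t)ticks * 1000U / LOSCFG_BASE_CORE_TICK_PER_SECOND);
}
```

With a 10 ms tick, a 15 ms request rounds up to 2 ticks (20 ms), which is why timing accuracy is bounded by the Tick period.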
A software timer triggers a user-defined callback function after a specified clock count value is reached. Its timing accuracy is bounded by the period of the system Tick clock.
Hardware timers are constrained by the hardware, and their number is often insufficient for users' actual needs. To provide
more timers, the HarmonyOS LiteOS-M kernel therefore offers a software timer function. Software timers expand the number of available timers,
allowing more timed services to be created.
• Static tailoring: the software timer function can be disabled through a macro.
Software timers are a system resource; a piece of contiguous memory for them is allocated when the module is initialized, and the maximum number of
software timers supported by the system is configured through a macro. Software timers use one queue and one task resource of the system. Timer
triggering follows first-in-first-out queue rules: a timer with a shorter timeout is always closer to the head of the queue than one with a longer timeout, so it is triggered first.
Software timers use the Tick as their basic timing unit. When a user creates and starts a software timer, the HarmonyOS
LiteOS-M kernel computes the timer's expiry Tick from the current system Tick time and the user-set timing interval, and inserts the timer
into a global timing linked list. When a Tick interrupt arrives, the Tick interrupt handler scans this global list to check whether
any timer has expired; if so, the expired timer is recorded.
After the Tick interrupt handler finishes, the software timer task (which has the highest priority) is woken up, and the expired timer's callback function is called in that task.
Timer states:
1. OS_SWTMR_STATUS_UNUSED (unused): when the timer module is initialized, all timer resources in the system are set to this state.
2. OS_SWTMR_STATUS_CREATED (created but not started / stopped): a timer enters this state after the LOS_SwtmrCreate interface is called, or after a counting timer is stopped.
3. OS_SWTMR_STATUS_TICKING (counting): a timer enters this state when the LOS_SwtmrStart interface is called after creation, and it counts down while in this state.
The software timer of the HarmonyOS LiteOS-M kernel provides three types of timer mechanisms:
The first type is the one-shot timer: after being started, it triggers its timer event only once, and the timer is then deleted automatically.
The second type is the periodic timer: it triggers its timer event periodically until the user stops it manually; otherwise it runs forever.
The third type is also a one-shot timer but, unlike the first type, it is not deleted automatically after it expires;
the timer deletion interface must be called explicitly to delete it.
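The behavior of the three timer types on expiry can be modeled like this. It is a plain-C sketch of the mode logic, not the kernel's actual software timer code; the enum names are illustrative, chosen to mirror the once / periodic / no-self-delete semantics just described:

```c
typedef enum {
    TIMER_ONCE,            /* one-shot, auto-deleted after it fires */
    TIMER_PERIOD,          /* fires periodically until manually stopped */
    TIMER_ONCE_NO_DELETE   /* one-shot, must be deleted explicitly */
} TimerMode;

typedef enum { UNUSED, CREATED, TICKING } TimerState;

typedef struct {
    TimerMode mode;
    TimerState state;
    unsigned interval;  /* timing interval, in ticks */
    unsigned expiry;    /* absolute tick at which the timer fires */
    int fired;          /* number of times the callback has run */
} SwTmr;

/* Called from the software timer task when tmr->expiry is reached. */
static void TimerExpire(SwTmr *tmr, unsigned now)
{
    tmr->fired++;                           /* run the user callback */
    switch (tmr->mode) {
    case TIMER_PERIOD:
        tmr->expiry = now + tmr->interval;  /* re-arm for the next period */
        break;
    case TIMER_ONCE:
        tmr->state = UNUSED;                /* deleted automatically */
        break;
    case TIMER_ONCE_NO_DELETE:
        tmr->state = CREATED;               /* stopped; awaits explicit delete */
        break;
    }
}
```

The only difference between the three types is what happens after the callback: re-arm, self-delete, or fall back to the created/stopped state.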
file closing, reading file content, writing content to a file, setting the file offset, deleting a file, renaming a file, obtaining file information through a file handle,
obtaining file information through a file path name, flushing file content to the storage device, creating a directory, opening a directory, reading directory
entries, closing a directory, deleting a directory, mounting a partition, unmounting a partition (including forced unmounting through the MNT_FORCE
parameter), and obtaining partition information.
3.2.7.1 FAT
The FAT file system ("FAT" is short for File Allocation Table) mainly comprises three areas: the DBR area, the FAT area, and the DATA area.
Each entry in the FAT area records information about the corresponding cluster on the storage device, including whether the cluster is in use, the number of the next
cluster of the file, and whether the file ends there. The FAT file system has several formats, such as FAT12, FAT16, and FAT32, where 12, 16, and 32
denote the number of bits in a FAT entry of the corresponding format. FAT supports a variety of media and is especially widely used on
removable storage media (USB flash drives, SD cards, portable hard disks, etc.), allowing embedded devices to stay well compatible with desktop systems such
as Windows and Linux and making it easier for users to manage and manipulate files.
The HarmonyOS LiteOS-M kernel supports the FAT file system in the FAT12, FAT16, and FAT32 formats. It features small
code size, low resource usage, tailorability, and support for multiple physical media, remains compatible with systems such as Windows and
Linux, and supports multi-device and multi-partition identification. The LiteOS-M kernel supports
multiple hard disk partitions and can create a FAT file system on both primary partitions and logical partitions.
Using the FAT file system requires support from the underlying MMC driver. To run FATFS on a board with an MMC storage
device, you need to: 1. adapt the board-side EMMC driver to implement the disk_status, disk_initialize, disk_read, disk_write, and
disk_ioctl interfaces; 2. add an fs_config.h file and configure FS_MAX_SS (the maximum sector size of the storage device) and FF_VOLUME_STRS (the partition names).
There are some precautions when using FAT. FATFS file and directory operations: a single file cannot exceed 4 GB. At most FAT_MAX_OPEN_FILES
files and FAT_MAX_OPEN_DIRS folders can be open at the same time. Root directory management is not supported yet, and file/directory names start
with the partition name; for example, "user/testfile" is the file or directory named "testfile" under the "user" partition. If the same file needs to be opened
multiple times simultaneously, read-only mode (O_RDONLY) must be used; in writable modes (O_RDWR, O_WRONLY, etc.) a file can be opened only once.
The read and write pointers are not separated: for example, after a file is opened in O_APPEND (append) mode, the read pointer is also at the end of the
file, and the user must reposition it manually before reading the file from the beginning. Permission management of files and directories is not supported
yet. The stat and fstat interfaces currently do not support
querying the modification time, creation time, and last access time; the Microsoft FAT protocol does not support times before 1980. FATFS partition
mounting and unmounting: mounting partitions with the read-only attribute is supported; when the mount function is passed MS_RDONLY, all interfaces
that involve writing, such as write, mkdir, unlink, and open with non-O_RDONLY flags, are rejected. mount supports changing the permissions of a
mounted partition through the MS_REMOUNT flag. Before an umount operation, make sure all directories and files are closed. umount2 supports
forcibly closing all files and folders and unmounting through the MNT_FORCE parameter, but this may cause data loss, so use it with caution.
FATFS supports repartitioning and formatting of storage device partitions through the fatfs_fdisk and fatfs_format interfaces. Before a fatfs_format
operation, if the partition to be formatted is mounted, make sure all directories and files in it are closed and the partition is unmounted. Before a
fatfs_fdisk operation, all partitions on the device must be unmounted. fatfs_fdisk and fatfs_format may cause loss of device data, so use them with caution.
The following is a FAT programming example. The prerequisite is that the system has mounted the MMC device partition to the "user"
directory. The example does the following: creates the directory "user/test", creates the file "file.txt" in the "user/test" directory, writes "Hello
HarmonyOS!" at the beginning of the file, flushes the file content to the device, sets the offset back to the start of the file, reads the file content, closes the file,
deletes the file, and deletes the directory. The code is as follows:
#include <stdio.h>
#include <string.h>
#include "sys/stat.h"
#include "fcntl.h"
#include "unistd.h"

#define LOS_OK 0
#define LOS_NOK -1

int FatfsTest(void)
{
    int ret;
    int fd = -1;
    ssize_t len;
    off_t off;
    char dirName[20] = "user/test";
    char fileName[20] = "user/test/file.txt";
    char writeBuf[20] = "Hello HarmonyOS!";
    char readBuf[20] = {0};

    /* Create the directory "user/test" */
    ret = mkdir(dirName, 0777);
    if (ret != LOS_OK) {
        printf("mkdir failed.\n");
        return LOS_NOK;
    }
    /* Create a readable and writable file "file.txt" under "user/test" */
    fd = open(fileName, O_RDWR | O_CREAT, 0777);
    if (fd < 0) {
        printf("open file failed.\n");
        return LOS_NOK;
    }
    /* Write "Hello HarmonyOS!" at the beginning of the file */
    len = write(fd, writeBuf, strlen(writeBuf));
    if (len != (ssize_t)strlen(writeBuf)) {
        printf("write file failed.\n");
        return LOS_NOK;
    }
    /* Flush the file content to the storage device */
    ret = fsync(fd);
    if (ret != LOS_OK) {
        printf("fsync failed.\n");
        return LOS_NOK;
    }
    /* Set the offset back to the start of the file */
    off = lseek(fd, 0, SEEK_SET);
    if (off != 0) {
        printf("lseek failed.\n");
        return LOS_NOK;
    }
    /* Read the file content into readBuf; the read length is the size of readBuf */
    len = read(fd, readBuf, sizeof(readBuf) - 1);
    if (len < 0) {
        printf("read file failed.\n");
        return LOS_NOK;
    }
    printf("%s\n", readBuf);
    /* Close and delete the file, then delete the directory */
    (void)close(fd);
    (void)unlink(fileName);
    (void)rmdir(dirName);
    return LOS_OK;
}
The result of running the example is:
Hello HarmonyOS!
3.2.7.2 LittleFS
LittleFS is mainly used on microcontrollers and Flash. It is an embedded file system with three key characteristics: power-loss recovery, so that
even if the device is reset or loses power during a write it can be restored to the previous correct state; wear leveling, where balancing erases and
writes effectively extends the service life of Flash devices; and a small footprint in RAM and ROM.
There are two classic power-loss protection methods: one uses a log (journal), the other uses COW (copy-on-write). lfs combines the two methods,
mitigates the shortcomings of each, and provides its own power-loss protection strategy.
The schematic diagram of the power-down protection mechanism in log mode is as follows.
The specific steps are: 1. before writing the data, store a start flag in the log area and record the location and size of the data to be written; 2. write the
data into the log area; 3. write the data into the data area; 4. after the write completes, record an end flag in the log area.
Simulated power-loss scenarios: if step 1 completed but step 2 did not, the original data is kept after restart and the log is treated as invalid;
if steps 1 and 2 completed but step 3 did not, the data recorded in step 2 is written into the data area after restart; if steps 1, 2, and 3 completed but step 4 did
not, the data recorded in step 2 is likewise written into the data area.
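The recovery decision after power loss can be sketched as a plain-C check over the journal flags. This is illustrative only; littlefs's real on-disk format is more involved, and the struct and function names here are invented for the sketch:

```c
#include <stdbool.h>

typedef struct {
    bool startFlag;   /* step 1: start mark + location/size recorded */
    bool dataLogged;  /* step 2: data copied into the log area */
    bool dataWritten; /* step 3: data copied into the data area */
    bool endFlag;     /* step 4: end mark recorded */
} Journal;

/* On reboot: decide whether the logged data must be (re)played into the
 * data area. Returns true if a replay is needed. */
static bool NeedReplay(const Journal *j)
{
    if (!j->startFlag || !j->dataLogged) {
        return false;  /* log invalid: keep the original data */
    }
    /* steps 1+2 done: the log holds a complete copy, so the write can be
     * (re)applied whether or not steps 3 and 4 finished */
    return !j->endFlag;
}
```

The key property is that the log only becomes authoritative once the data is fully inside it, and the write only counts as committed once the end flag lands, so every crash point maps to exactly one of the two outcomes above.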
The schematic diagram of the power-loss protection mechanism using the COW mechanism is as follows.
The specific steps are: 1. to update the data of node F, first allocate a new node, copy F's old data into it, and then apply the new data;
2. point the parent node's pointer to the new node and remove the pointer to the old node.
Simulated power-loss scenario: if step 1 completed but step 2 did not, the old data is still used after restart, and the new node simply becomes an
orphan node.
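The copy-on-write update can be sketched as follows. It is a generic pointer-swap illustration (names invented for the sketch), not littlefs code:

```c
#include <stdlib.h>

typedef struct Node {
    int data;
} Node;

typedef struct {
    Node *child;  /* the parent's pointer to the live node */
} Parent;

/* Update a child node with COW: the old node is left untouched until the
 * parent pointer is switched to the fully written copy.
 * Returns the orphaned old node (for reclamation), or NULL on failure. */
static Node *CowUpdate(Parent *p, int newData)
{
    /* step 1: allocate a new node, copy the old data, apply the update */
    Node *fresh = malloc(sizeof(Node));
    if (fresh == NULL) {
        return NULL;
    }
    *fresh = *p->child;      /* copy F's old contents */
    fresh->data = newData;   /* write the new data */

    /* step 2: repoint the parent; the old node becomes an orphan */
    Node *old = p->child;
    p->child = fresh;
    return old;
}
```

Because the parent pointer only changes after the new node is complete, a crash at any point leaves either the old consistent tree or the new consistent tree, never a half-written node.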
lfs combines the log method and the COW mechanism for power-loss protection and optimizes both. Recall the three elements
of a file system mentioned earlier: superblock, inode, and data. lfs stores superblocks and inodes in log form, using a unified storage structure
for the two that is referred to as metadata below. Ordinary data is stored in COW form, using a CTZ (count trailing zeros) reverse-order skip list.
Metadata (corresponding to the root and directories) is stored in pairs of blocks that back each other up. Each block carries a revision sequence number;
the larger the number, the newer the block's data. By default each block can store up to 0xff entries of file data; beyond that the block must be
compacted. compact does the following: when the data size exceeds a threshold, it consolidates the data, removes stale
data with the same ID, and writes the result into the backup block.
Ordinary data storage: lfs manages data blocks in a reverse (backward-pointing) linked list, as shown in the schematic. Thanks to the reverse
pointers, routinely appending data does not require the extra overhead of rebuilding all indexes; in addition, blocks whose index is divisible by a power of
two carry multiple pointers that skip further back, which speeds up traversal.
A development example using LittleFS follows. To port LittleFS to a new hardware device, you need to declare an lfs_config:
const struct lfs_config cfg = {
    .read  = user_provided_block_device_read,
    .prog  = user_provided_block_device_prog,
    .erase = user_provided_block_device_erase,
    .sync  = user_provided_block_device_sync,
    // block device configuration
    .read_size = 16,
    .prog_size = 16,
    .block_size = 4096,
    .block_count = 128,
    .cache_size = 16,
    .lookahead_size = 16,
    .block_cycles = 500,
};
Here .read, .prog, .erase, and .sync correspond to the underlying read, write, erase, and synchronization interfaces of the hardware platform, respectively.
• read_size: the number of bytes read at a time. It can be larger than the physical read unit to improve performance; this value determines the size of the
read cache, but too large a value costs more memory.
• prog_size: the number of bytes written at a time. It can be larger than the physical write unit to improve performance; this value determines the size of
the write cache and must be an integer multiple of read_size, but too large a value costs more memory.
• block_size: the number of bytes per erase block. It can be larger than the physical erase unit, but should be kept as small as possible because each
file occupies at least one block; it must be an integer multiple of prog_size.
• block_count: the number of erasable blocks, which depends on the capacity of the block device and the size of an erase block.
lfs_t lfs;
lfs_file_t file;

int main(void) {
    // mount the filesystem; reformat on the first boot if mounting fails
    int err = lfs_mount(&lfs, &cfg);
    if (err) {
        lfs_format(&lfs, &cfg);
        lfs_mount(&lfs, &cfg);
    }
    // open (creating if necessary) a file and write to it
    lfs_file_open(&lfs, &file, "hello.txt", LFS_O_RDWR | LFS_O_CREAT);
    lfs_file_write(&lfs, &file, "Hello", 5);
    // remember the storage is not updated until the file is closed successfully
    lfs_file_close(&lfs, &file);
    lfs_unmount(&lfs);
    return 0;
}
An interrupt is the process by which the CPU suspends execution of the current program and switches to handle a new task when necessary: while a
program is running, an event occurs that the CPU must handle immediately, so the CPU temporarily suspends the current program and processes that
event. This process is called an interrupt. Through the interrupt mechanism, the CPU avoids spending large amounts of time waiting for and polling
peripheral status, which greatly improves system real-time performance and execution efficiency.
Exception handling is the series of actions the operating system takes to deal with abnormal situations that occur during operation (chip hardware
exceptions), such as virtual memory page faults, and includes printing the function call stack, CPU context, and task stack information at the point where
the exception occurred.
Peripherals can complete certain tasks without CPU intervention, but in some cases the CPU is also required to perform certain tasks for them. Through the interrupt mechanism,
when the peripheral does not require CPU intervention, the CPU can perform other tasks, and when the peripheral requires the CPU, an interrupt signal is generated, which is connected
to the interrupt controller. The interrupt controller receives input from interrupt pins of other peripherals on the one hand, and on the other hand it sends interrupt signals to the CPU. The
interrupt controller can be programmed to enable and disable interrupt sources and to set each source's priority and trigger mode. Commonly used
interrupt controllers include the VIC (Vectored Interrupt Controller) and the GIC (Generic Interrupt Controller); the ARM Cortex-A7 uses the GIC. After
receiving the interrupt signal from the interrupt controller, the CPU interrupts the current task to service the interrupt request.
Exceptions are events that break the CPU's normal execution flow, such as undefined instruction exceptions, attempts to modify read-only data,
and misaligned address accesses. When an exception occurs, the CPU suspends the current program, handles the exception event first, and then
continues executing the interrupted program.
Taking the ARMv7-A architecture as an example, the entry point for interrupt and exception handling is the interrupt vector table, which contains the
entry addresses of the individual interrupt and exception handlers.
3.3.2.1 Process
A process is the smallest unit of system resource management. The process module provided by the HarmonyOS LiteOS-A kernel is mainly used
to isolate user-mode processes. Kernel mode is regarded as a single process space containing no other processes (except KIdle, the idle
process provided by the system, which shares a process space with KProcess).
The HarmonyOS process module mainly provides users with multiple processes, implements switching and communication between processes, and
helps users manage business program processes. HarmonyOS processes use a preemptive scheduling mechanism with a highest-priority-first plus
same-priority round-robin time-slice algorithm. HarmonyOS processes have 32 priorities (0-31), of which user processes can configure
22 (10-31); 10 is the highest of these and 31 the lowest. A high-priority process can preempt a low-priority one, and a low-priority
process can only be scheduled after higher-priority processes block or terminate. Each user-mode process has its own independent process space,
invisible to the others, which achieves inter-process isolation. The user-mode root process init is created by the kernel; the other user-mode
child processes are created by forking from the init process.
Process state descriptions: Init means the process is being created; Ready means the process is in the ready list waiting to be scheduled on the
CPU; Running means the process is running; Pending means the process is blocked and suspended, which happens when all threads of the process
are blocked; Zombie means the process has finished running and is waiting for its parent process to reclaim its control
block resources.
• Init→Ready:
When a process is created or forked, it enters the Init state after obtaining its process control block and is in the process initialization phase.
Once initialization completes, the process is inserted into the scheduling queue and enters the Ready state.
• Ready→Running:
After being created, a process enters the Ready state. When a process switch occurs, the highest-priority process in the ready list is executed and
enters the Running state. If no other thread of that process is ready at that moment, the process is removed from the ready list and is only in the
Running state; if other threads of the process are ready, the process remains in the ready queue, its Ready and Running states
coexist, and the state presented externally is Running.
• Running→Pending:
When the last thread of a process enters the blocked state, all threads of the process are blocked, and the process synchronously enters the Pending state, after which a process switch occurs.
• Pending→Ready:
When any thread of a blocked process returns to the Ready state, the process is added to the ready queue and synchronously changes to the Ready state.
• Ready→Pending:
When the last ready thread of a process enters the blocked state, the process is removed from the ready list and changes from Ready to Pending.
• Running→Ready:
A process changes from Running to Ready in two situations:
1. After a higher-priority process is created or resumed, process scheduling occurs; the highest-priority process in the ready list
becomes Running, and the originally running process changes from Running to Ready.
2. If the process's scheduling policy is LOS_SCHED_RR and another process of the same priority is ready, then when the process's time slice is
exhausted it changes from Running to Ready, and the other process of the same priority changes from Ready to Running.
• Running→Zombie: when the main thread or all threads of a process finish running, the process changes from Running to Zombie
and waits for its parent process to reclaim its resources.
The process module provided by HarmonyOS is mainly used to isolate user-mode processes and supports creating and exiting user-mode processes,
recycling their resources, setting/getting scheduling parameters, getting the process ID, and setting/getting the process group ID. A user-mode process
is created by forking its parent: during the fork, the parent's virtual memory space is cloned into the child, and when the child actually runs, the parent's
content is copied into the child's virtual memory space on demand through the copy-on-write mechanism. A process is only a resource management
unit; the actual work is done by the threads within it. When threads of different processes switch, the process space is switched as well.
3.3.2.2 Threads
From the system's perspective, a thread is the smallest running unit that competes for system resources. A thread can use or wait for the CPU, use
system resources such as memory, and runs independently of other threads. In the HarmonyOS kernel, threads of processes with the same priority
are scheduled and run together. HarmonyOS kernel threads use a preemptive scheduling mechanism and support round-robin time-slice scheduling
as well as FIFO scheduling. HarmonyOS kernel threads have 32 priorities (0-31), with 0 the highest and 31 the lowest. Within the current process,
a high-priority thread can preempt a low-priority one, and a low-priority thread can only be scheduled after higher-priority threads block or terminate.
Thread state descriptions: Init means the thread is being created; Ready means the thread is in the ready list waiting to be scheduled on the CPU;
Running means the thread is running; Blocked means the thread is blocked and suspended. The Blocked state includes pend (blocked on a lock,
event, semaphore, etc.), suspend (actively suspended), delay (blocked by a delay), and pendtime (waiting with a timeout on a lock, event, semaphore,
etc.). Exit means the thread has finished running and is waiting for its parent thread to reclaim its control block resources.
• Init→Ready:
When a thread is created and obtains its control block, it is in the initialization phase (Init state). Once thread initialization completes, the thread is inserted into the scheduling queue and enters the Ready state.
• Ready→Running:
After being created, a thread enters the Ready state. When a thread switch occurs, the highest-priority thread in the ready list is executed and
enters the Running state; at that moment the thread is removed from the ready list.
• Running→Blocked:
When a running thread blocks (is suspended, delayed, reads a semaphore, etc.), its state changes from Running to Blocked, and a thread
switch then occurs, running the highest-priority thread in the ready list.
• Blocked→Ready:
When a blocked thread is recovered (the thread is resumed, a delay or semaphore-read times out, or a semaphore is read, etc.), the
recovered thread is added to the ready list and changes from Blocked to Ready.
• Ready→Blocked:
A thread may also block (be suspended) while in the Ready state; its state then changes from Ready to Blocked, and the thread
is removed from the ready list and does not take part in thread scheduling until it is resumed.
• Running→Ready:
After a higher-priority thread is created or resumed, thread scheduling occurs; the highest-priority thread in the ready list changes
to the Running state, and the originally running thread changes from Running to Ready and is added to the ready list.
● Running→Exit: When a running thread finishes, its state changes from Running to Exit. If the detached attribute is set for the thread, its control block resources are reclaimed automatically after it finishes; otherwise it remains in the Exit state until its parent thread reclaims them.
The HarmonyOS thread management module provides thread creation, thread delay, thread suspension and resumption, locking and unlocking of thread scheduling, and querying thread control block information by ID. When a user creates a thread, the system initializes the thread stack and presets the context. The system also places the address of the thread entry function in the corresponding location, so that when the thread enters the Running state for the first time, the thread entry function is executed.
3.3.2.3 Scheduler
The LiteOS-A kernel provides a preemptive scheduling mechanism: highest priority first, with round-robin time slices among threads of the same priority. The system runs forward along a real-time timeline, which gives the scheduling algorithm good real-time behavior. The tickless mechanism is embedded naturally in the scheduling algorithm: on the one hand the system consumes less power, and on the other hand tick interrupts respond on demand, reducing useless tick interrupt responses and further improving the real-time performance of the system. The process scheduling policy supports SCHED_RR; the thread scheduling policy supports SCHED_RR and SCHED_FIFO. The smallest unit of scheduling is the thread.
Scheduling uses process priority queues plus thread priority queues. The process priority range is 0-31, so there are 32 process priority bucket queues, and each bucket queue corresponds to a thread priority bucket queue; the thread priority range is also 0-31, so a thread priority bucket queue likewise contains 32 priority queues.
Scheduling starts after the kernel finishes initialization at system startup. Processes and threads created at run time are added to the scheduling queues. Based on the priorities of processes and threads and each thread's time-slice consumption, the system selects the optimal thread to schedule and run. Once a thread is scheduled, it is removed from the scheduling queue. If a thread blocks while running, it is added to the corresponding blocking queue and triggers a schedule so that another thread runs. If no thread in the scheduling queues can be scheduled, the system selects a thread of the KIdle process to run.
The memory management module manages the memory resources of the system. It is one of the core modules of the operating system, and its work mainly covers memory initialization, allocation, and release. The heap memory management of HarmonyOS's LiteOS-A kernel provides memory initialization, allocation, release and related functions. While the system runs, the heap memory management module serves memory requests and releases from users and the OS, optimizing memory utilization and efficiency while minimizing memory fragmentation.
Heap memory management means that, when memory resources are sufficient, memory blocks of any size are allocated on user demand from a relatively large contiguous memory region configured in the system (the memory pool, also called heap memory). When the user no longer needs a block, it can be released back to the system for reuse. Compared with static memory, the advantage of dynamic memory management is allocation on demand; the disadvantage is that fragmentation tends to appear in the memory pool. The heap memory of HarmonyOS's LiteOS-A kernel optimizes the division of size intervals based on the TLSF algorithm to achieve better performance and a lower fragmentation rate.
Free memory blocks are managed by multiple free linked lists according to block size. Free blocks are divided into two ranges: [4, 127] bytes and larger than 127 bytes. Memory in the [4, 127] interval is divided equally into 31 small intervals (the green part of the figure above); the block sizes in each small interval are multiples of 4 bytes. Each small interval corresponds to one free-memory linked list and one bit that marks whether that list is empty; a value of 1 means the list is not empty. The 31 small intervals of the [4, 127] range therefore correspond to 31 bits marking whether the lists are empty.
Free memory blocks larger than 127 bytes are managed by free linked lists sized by power-of-2 intervals. They are divided into 24 size classes, and each size class is divided equally into 8 second-level subintervals (see the blue Size Class and Size SubClass parts of the figure above). Each subinterval corresponds to one free linked list and one bit marking whether that list is empty. In total there are 24*8=192 subintervals, corresponding to 192 free linked lists and 192 marker bits.
For example, when a 40-byte free block needs to be inserted into a free linked list, it falls in the small interval [40, 43], which corresponds to the 10th free linked list and the 10th bit of the bitmap. The 40-byte free block is mounted on the 10th free linked list, and the bitmap bit is updated if necessary. When 40 bytes of memory are requested, the bitmap is consulted to find a free linked list containing blocks that satisfy the request, and a free memory node is taken from that list. If the allocated node is larger than the requested size, the node is split and the remainder is remounted on the corresponding free linked list. When a 580-byte free block needs to be inserted, it falls in the second-level subinterval [2^9, 2^9+2^6], which corresponds to free linked list 31+2*8=47; the 47th bit of the bitmap marks whether that list is empty. The 580-byte free block is mounted on the 47th free linked list, and the bitmap bit is updated if necessary. When 580 bytes of memory are requested, the bitmap is consulted to find a free linked list containing blocks that satisfy the request, and a free memory node is taken from it; if the node is larger than the requested size, it is split and the remainder remounted. If the corresponding free linked list is empty, larger size ranges are searched for a non-empty list that meets the conditions; in the actual computation, a free linked list satisfying the requested size is found in one pass.
The memory pool header contains the memory pool information, the bitmap array, and the free-list array. The memory pool information includes the starting address of the pool, the total size of the heap area, and the pool attributes. The bitmap array consists of seven 32-bit unsigned integers; each bit marks whether the corresponding free linked list has a free memory block node mounted on it. The free-list array contains the head-node information of the 223 free memory lists; each head node maintains the memory node header and the predecessor and successor free memory nodes in its list.
The memory pool node section contains 3 kinds of nodes: unused free memory nodes, used memory nodes, and the tail node. Each memory node maintains a predecessor pointer to the previous memory node in the pool, as well as the node's size and an in-use mark. The memory area behind a free or used node is its data region; the tail node has no data region.
Physical memory is one of the most important resources of a computer. It refers to the memory space provided by the actual memory device and can be addressed directly over the CPU bus; its main role is to provide temporary storage for the operating system and programs. The LiteOS-A kernel manages physical memory through paging: apart from the part occupied by the kernel heap, the remaining available memory is divided into page frames in units of 4KiB, and memory is allocated and reclaimed in page-frame units. The kernel uses the buddy algorithm to manage free page frames, which reduces the fragmentation rate to a certain extent and improves the efficiency of allocation and release, although a small block can still obstruct the merging of a larger block and prevent larger allocations. The operating mechanism is shown in the figure below. The physical memory usage distribution of the LiteOS-A kernel consists mainly of the kernel image, the kernel heap and physical pages.
The buddy algorithm divides all free page frames into 9 groups of memory blocks. Each block in group n contains 2^n page frames: for example, a block in group 0 contains 2^0 = 1 page frame, and a block in group 8 contains 2^8 = 256 page frames. Memory blocks of the same size are hung on the same linked list for management.
Allocating memory: when the system requests 12KiB of memory, that is, 3 page frames, the linked list at index 3 of the 9 groups holds a block of 8 page frames, which satisfies the request. After the 12KiB is allocated, 20KiB, that is, 5 page frames, remains. The 5 page frames are decomposed into a sum of powers of 2, namely 4 and 1, and the system tries to find buddies to merge with. If the 4-page-frame block has no buddy, it is inserted directly into the linked list at index 2. The system then checks whether the 1-page-frame block has a buddy; there is one block on the list at index 0, and if the two blocks' addresses are contiguous, they are merged and the merged block is linked onto the list at index 1.

Releasing memory: the system releases 12KiB, that is, 3 page frames. The 3 page frames are decomposed into a sum of powers of 2, namely 2 and 1, and the system tries to find buddies to merge with. There is one block on the list at index 1; if the addresses are contiguous, they are merged and the merged block is hung on the list at index 2. There is also one block on the list at index 0; if the addresses are contiguous, they are merged and the merged block goes onto the list at index 1. The system then continues checking for buddies and repeats the operation above.
Virtual memory management is a technology by which computer systems manage memory. Each process has a contiguous virtual address space whose size is determined by the CPU's bit width: the maximum addressing space a 32-bit hardware platform can provide is 0-4GiB. The whole 4GiB space is divided into two parts: the LiteOS-A kernel occupies the high 3GiB, and the low 1GiB is reserved for processes. The virtual address space of each process is independent, and code and data do not affect each other.
The system divides virtual memory into memory blocks called virtual pages, generally 4KiB or 64KiB in size. The LiteOS-A kernel defaults to a 4KiB page size, which the MMU (Memory Management Unit) can be configured to change as needed. The smallest unit of virtual memory management operations is a page. A virtual address region in the LiteOS-A kernel consists of virtual pages with contiguous addresses.
A region may span multiple contiguous virtual pages or only a single page. Similarly, physical memory is also divided by page size, and each resulting block is called a page frame. The virtual address space is divided as follows: the kernel state occupies the high 3GiB (0x40000000 ~ 0xFFFFFFFF) and the user state occupies the low 1GiB (0x01000000 ~ 0x3F000000). See the table below for details; the layout can be viewed or configured in los_vm_zone.h.
Zone           Start address    End address    Purpose                                      Cache attribute
DMA zone       0x40000000       0x43FFFFFF     DMA memory access for USB, network, etc.     Uncache
Normal zone    0x80000000       0x83FFFFFF     Kernel code, data segment, heap and stack    Cache
Code segment   0x02000000       0x09FFFFFF     User-mode code segment address space         Cache
Heap           0x0FC00000 *     0x17BFFFFF     User-mode heap address space                 Cache
Stack          0x37000000 *     0x3EFFFFFF     User-mode stack address space                Cache
(* the starting address is randomized)
In virtual memory management, the virtual address space is contiguous, but the physical memory it maps to is not necessarily contiguous, as shown in the figure below. When an executable program is loaded and run, there are two situations when the CPU accesses code or data in the virtual address space:

● The page containing the virtual address accessed by the CPU, such as V0, has already been mapped to a specific physical page P0. The CPU finds the page table entry of the process (see the virtual-real mapping section for details) and accesses the physical memory according to the physical address information in that entry.

● The page containing the virtual address accessed by the CPU, such as V2, is not mapped to a specific physical page, so the system triggers a page-fault exception. Normally, the system requests a physical page, copies the corresponding data into it, and writes the physical page's starting address into the page table entry. The CPU can then re-execute the instruction that accessed virtual memory and reach the specific code or data.
Virtual-real mapping means that the system maps the virtual addresses of a process space to actual physical addresses through the Memory Management Unit (MMU), specifying the corresponding access permissions, cache attributes, and so on. When a program executes, the CPU accesses virtual memory, finds the corresponding physical memory through the MMU's page table entries, and performs the code execution or data read/write. The MMU's mapping is described by the page table, which stores the mapping relationships between virtual and physical addresses as well as access permissions. Each process creates its page table when it is created. A page table is composed of page table entries (PTEs), each describing the mapping between a virtual address interval and a physical address interval. The MMU contains a page-table cache called the TLB (Translation Lookaside Buffer): during address translation the MMU searches the TLB first, and if the matching page table entry is found, the translation completes directly, improving lookup efficiency.
Virtual-real mapping is in fact the process of building the page table. The MMU supports multi-level page tables, and the LiteOS-A kernel uses a two-level page table to describe the process space. Each first-level page table entry descriptor occupies 4 bytes and can represent the mapping of 1MiB of memory, so 1024 first-level entries are required for the 1GiB of user space (user space occupies 1GiB in the LiteOS-A kernel). When the system creates a user process, it requests a 4KiB memory block as storage for the first-level page table; second-level page tables are allocated dynamically according to the needs of the current process.
When a user program is loaded and started, its code and data segments are mapped into the virtual memory space (see the dynamic loading and linking section for details), but no physical pages are actually mapped at that time. When the program executes, as shown by the thick arrow in the figure below, the CPU accesses a virtual address and checks through the MMU whether corresponding physical memory exists. If the virtual address has no corresponding physical memory, a page-fault exception is triggered: the kernel requests physical memory, writes the virtual-real mapping relationship and the corresponding attribute configuration into the page table, and caches the page table entry in the TLB. The CPU can then access the actual physical memory directly through the translation. If the entry the CPU needs is already cached in the TLB, there is no need to access the page table stored in memory, which speeds up the lookup.
3.3.4.1 Events
An event is a communication mechanism between tasks and can be used for synchronization between tasks. In a multi-task environment, tasks often need to synchronize with each other, and a wait is one form of synchronization. Events provide one-to-many and many-to-many synchronization. In the one-to-many model, one task waits on multiple events: it can be woken when any one event occurs, or only after several events have all occurred. In the many-to-many model, multiple tasks wait on multiple events.
Tasks create an event control block to trigger events or wait for events. Events are independent of one another and are implemented internally as a 32-bit unsigned integer in which each bit identifies an event type. Bit 25 is unavailable, so at most 31 event types are supported. Events are only used for synchronization between tasks and do not transmit data. Writing the same event type to the event control block multiple times before it is cleared is equivalent to writing it once. Multiple tasks can read and write the same event.
● Event initialization: an event control block is created; it maintains the set of processed events and a list of tasks waiting for specific events.
● Event write: the specified event is written into the event control block. The control block updates its event set and traverses the task list, deciding whether to wake each task according to whether that task's specific wait conditions are met.
● Event read: if the events to be read have already occurred, the call returns synchronously right away. In other cases the return time depends on the timeout and on event triggering: if the awaited event condition arrives before the timeout expires, the blocked task is woken directly; otherwise the task is woken when the timeout expires. Whether the read condition is met depends on the parameters eventMask and mode: eventMask is the set of events of interest, and mode is the specific handling method, divided into the following three cases:
LOS_WAITMODE_AND: returns only when all events in eventMask have occurred.
LOS_WAITMODE_OR: returns when any event in eventMask occurs.
LOS_WAITMODE_CLR: after a successful read, the corresponding read events are cleared. It must be used together with LOS_WAITMODE_AND or LOS_WAITMODE_OR.
● Event clear: clears the event set of the event control block according to the specified mask. A mask of 0 clears the whole event set; a mask of 0xffff clears nothing and keeps the event set as it is.
3.3.4.2 Semaphore
A semaphore is a mechanism for communication between tasks that can achieve synchronization between tasks or mutually exclusive access to shared resources. The data structure of a semaphore usually contains a count that records the number of effective resources, meaning the remaining available resources:

● 0 indicates that the semaphore is currently unavailable, so there may be tasks blocked waiting on it.
● A positive value indicates that the semaphore is available, the value being the number of remaining available resources.

Semaphores used for synchronization differ in usage from semaphores used for mutual exclusion as follows:
● When used for mutual exclusion, the initial count is not 0, representing the number of available shared resources. A task obtains the semaphore before using a shared resource and releases it after use. When all shared resources are taken, that is, when the count drops to 0, other tasks that need the semaphore are blocked, which guarantees mutually exclusive access to the shared resources. When there is only one shared resource, a binary semaphore, a mechanism similar to a mutex lock, is recommended.
● When used for synchronization, the initial count is 0. Task 1 acquires the semaphore and blocks until task 2, or an interrupt, releases it; only then does task 1 enter the Ready or Running state, achieving synchronization between tasks.
A counting semaphore allows multiple tasks to access the same shared resource at the same time but limits the maximum number of tasks that may do so. When the number of tasks accessing the resource reaches the maximum the resource allows, further tasks trying to acquire the resource are blocked until the semaphore is released.
● Semaphore initialization: memory is requested for the N configured semaphores (N is user-configurable via the LOSCFG_BASE_IPC_SEM_LIMIT macro), all semaphores are initialized as unused, and they are added to the unused linked list for the system to use.
● Semaphore creation: a semaphore is taken from the unused semaphore list and its initial value is set.
● Semaphore request: if the count is greater than 0, it is decremented by 1 and success is returned directly. Otherwise the task blocks waiting for another task to release the semaphore; a wait timeout can be set. When a task is blocked on a semaphore, it is placed at the tail of the semaphore's waiting task queue.
● Semaphore release: if no task is waiting on the semaphore, the count is incremented by 1 and the call returns. Otherwise the first task waiting on the semaphore is woken.
● Semaphore deletion: the in-use semaphore is marked unused and hung back on the unused linked list.
A task that holds a mutex lock has ownership of the mutex; the task loses ownership when it releases the lock. While a task holds a mutex, no other task can hold it. In a multi-task environment, multiple tasks often compete for the same shared resource; a mutex can protect the shared resource and achieve exclusive access.

A mutex has 3 attributes: the protocol attribute, the priority-ceiling attribute, and the type attribute. The protocol attribute governs how tasks of different priorities that request the mutex are handled, and it includes the following three values:
● LOS_MUX_PRIO_NONE
The priority of the task that requests the mutex is neither inherited nor protected.
● LOS_MUX_PRIO_INHERIT
The priority-inheritance attribute, set by default: the priority of the task requesting the mutex is inherited. With this protocol attribute, if a high-priority task blocks on the mutex, the priority of the task holding the mutex is backed up into the priority bitmap of its task control block, and its priority is then raised to the same priority as the high-priority task. When the holder releases the mutex, its priority is restored from the priority bitmap of its task control block.
● LOS_MUX_PRIO_PROTECT
The priority-protection attribute: the priority of the task requesting the mutex is protected. With this protocol attribute, if the priority of a task requesting the mutex is lower than the mutex's priority ceiling, the task's priority is backed up into the priority bitmap of its task control block and then raised to the priority-ceiling value; when the mutex is released, the task's priority is restored from the priority bitmap.
The type attribute of a mutex marks whether deadlock is detected and whether recursive holding is supported. It includes the following three values:
● LOS_MUX_NORMAL
An ordinary mutex that does not detect deadlock. If a task tries to hold the mutex again while already holding it, that thread deadlocks. Trying to release a mutex held by another task, or releasing the mutex repeatedly, produces unpredictable results.
● LOS_MUX_RECURSIVE
A recursive mutex; this attribute is set by default. With this type, the same task may hold the mutex multiple times, and only when the lock has been released as many times as it was held can another task acquire it. Trying to hold a mutex already held by another task, or to release a mutex that has already been released, returns an error code.
● LOS_MUX_ERRORCHECK
An error-checking mutex; deadlock is detected automatically. With this type, an error code is returned if a task tries to hold the mutex repeatedly, tries to release a mutex held by another task, or tries to release a mutex that has already been released.
In a multi-task environment, multiple tasks may access the same public resource, and some public resources are not shared and require exclusive handling by one task. How does a mutex avoid this conflict? When a mutex is used to manage exclusive access to such a resource, the mutex is locked while a task accesses it. If other tasks then want to access this public resource, they are blocked until the mutex is released by the task holding it, after which another task can access the public resource, locking the mutex again. This guarantees that only one task is accessing the public resource at any time.
A queue, also known as a message queue, is a data structure commonly used for communication between tasks. The queue receives fixed-length messages from tasks or interrupts and, depending on the interface used, determines whether the delivered message is stored in the queue space.
A task can read messages from a queue. When the queue is empty, the reading task is suspended; when a new message arrives, the suspended reader is woken to process it. A task can also write messages to a queue. When the queue is full, the writing task is suspended; when a message node becomes free, the suspended writer is woken and writes its message.
The blocking behavior of the read and write interfaces is controlled by the read-queue and write-queue timeouts. If the timeout is set to 0, the task is not suspended and the interface returns immediately: this is non-blocking mode. Conversely, if both timeouts are set to a value greater than 0, the queue runs in blocking mode.
Message queues provide an asynchronous processing mechanism: a message can be placed in a queue without being processed immediately, and the queue also buffers messages. Queues can therefore implement asynchronous communication between tasks. A queue has the following characteristics:
● Messages are queued first-in first-out, and asynchronous reads and writes are supported.
● Both the read queue and the write queue support the timeout mechanism.
● Each time a message is read, its message node is set back to free.
● The message format is agreed upon by the two communicating parties, and messages of different lengths (not exceeding the queue's message node size) are allowed.
● A task can receive messages from, and send messages to, any message queue.
● Multiple tasks can receive from and send to the same message queue.
● The queue space needed when creating a queue is allocated dynamically by the system inside the creation interface.
● When a queue is created successfully, its queue ID is returned.
● The queue control block maintains a message head-node position Head and a message tail-node position Tail to indicate the storage state of messages in the queue. Head is the starting position of the occupied message nodes; Tail is the end position of the occupied nodes and the starting position of the free ones. When a queue is first created, Head and Tail both point to the start of the queue.
● When writing to the queue, readWriteableCnt[1] determines whether the queue can be written; a full queue (readWriteableCnt[1] is 0) cannot be written. The write interface supports two modes: writing at the tail node or writing at the head node. For a tail write, the first free message node found from Tail is the destination; if Tail already points to the end of the queue, it wraps around. For a head write, the node preceding Head is the destination; if Head points to the start of the queue, it wraps around.
● When reading from the queue, readWriteableCnt[0] determines whether there are messages to read; reading from an empty queue (readWriteableCnt[0] is 0) suspends the task. If a message can be read, the message node written earliest is located from Head and read; if Head already points to the end of the queue, it wraps around.
● When deleting a queue, the queue is found by its ID, its state is set to unused, and its control block is reset to the initial state.
Figure 3-25 Schematic diagram of queue reading and writing data operations
The figure above illustrates reading and writing the queue. Only tail-node writes are shown; head-node writes are not, but the wraparound behavior is analogous.
Read-write locks are similar to mutex locks and can be used to synchronize tasks within the same process. Unlike a mutex, however, a read-write lock allows multiple read operations to proceed concurrently and reentrantly, while write operations remain mutually exclusive. Whereas a mutex is either unlocked or locked, a read-write lock has three states: locked in read mode, locked in write mode, and unlocked.
● When no write-mode lock is held on the protected area, any task may take a read-mode lock on it.
● A write-mode lock can only be taken when the protected area is in the unlocked state.
In a multi-task environment, multiple tasks often access the same shared resource. A read-mode lock grants shared access to the protected area, while a write-mode lock protects the shared resource for exclusive access. This shared-exclusive approach is well suited to applications where data is read far more often than it is written, improving multi-task read/write efficiency. Compared with a mutex, how does a read-write lock implement read-mode and write-mode locking to control multi-task read and write access?
ÿIf task A acquires the lock in write mode for the first time, there will be no other tasks to acquire or try to acquire the lock in read mode.
ÿIf task A acquires the lock in read mode, when a task acquires or attempts to acquire the lock in read mode, the read-write lock count
Futex (Fast userspace mutex) is a mechanism in which user mode and kernel mode cooperate to implement locks, such as user-mode mutexes, barrier and cond synchronization primitives, and read-write locks. The user-mode part is responsible for the lock logic, and the kernel-mode part is responsible for lock scheduling.
When a user-mode thread requests a lock, the lock state is first checked and maintained in user mode. If there is no contention for the lock at that moment, the lock is acquired and returned directly in user mode; otherwise the thread must be suspended, and the Futex system call asks the kernel to suspend the thread and maintain the blocking queue.
When a user-mode thread releases a lock, the lock state is likewise first checked and maintained in user mode. If no other thread is blocked on the lock, it is unlocked and returned directly in user mode; otherwise the blocked threads must be woken, and the Futex system call asks the kernel to wake the threads in the blocking queue.
When lock contention or release in user mode requires scheduling of the related threads, the Futex system call is triggered to enter the kernel, and the address of the user-mode lock is passed in; the kernel's Futex distinguishes individual user-mode locks by this address. Because the available user-mode virtual address space is 1 GiB, the kernel Futex stores incoming user-mode locks in hash buckets to ease lookup and management. There are currently 80 hash buckets: buckets 0-63 store private locks (hashed by virtual address), and buckets 64-79 store shared locks (hashed by physical address). The private/shared attribute is determined by the initialization of the user-mode lock and the input parameters of the Futex system call.
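The bucket-selection rule described above can be sketched as follows (the hash function here is a placeholder, not the kernel's actual mixing function):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define FUTEX_BUCKET_COUNT 80
#define PRIVATE_BUCKETS    64  /* buckets 0..63: private locks (virtual address) */
                               /* buckets 64..79: shared locks (physical address) */

/* Placeholder hash; the kernel's real function differs. */
static uint32_t HashAddr(uintptr_t addr) {
    return (uint32_t)((addr >> 4) * 2654435761u);
}

/* Map a lock address to its hash bucket, per the private/shared split. */
int FutexBucket(uintptr_t lockAddr, bool shared) {
    if (shared) {
        return PRIVATE_BUCKETS +
               (int)(HashAddr(lockAddr) % (FUTEX_BUCKET_COUNT - PRIVATE_BUCKETS));
    }
    return (int)(HashAddr(lockAddr) % PRIVATE_BUCKETS);
}
```

A private lock always lands in buckets 0-63 and a shared lock in buckets 64-79, so the two kinds never collide.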
3.3.4.7 Signal
A signal is a commonly used asynchronous inter-process communication mechanism that simulates interrupts in software. When one process needs to send information to another, it sends a signal to the kernel, and the kernel delivers the signal to the specified process; the receiving process does not need to wait for the signal.
Time management is based on the system clock and provides all time-related services to applications. The system clock is generated by interrupts triggered by the output pulses of a timer/counter and is generally represented as an integer or long integer. The period of the output pulse is called a "clock tick"; the system clock is also called the timestamp or Tick. The duration of one Tick can be statically configured. Users measure time in seconds and milliseconds, while the operating system clock measures time in Ticks. When a user requests an operation such as task suspension or delay, a value in seconds or milliseconds is supplied, and the time management module converts between seconds/milliseconds and Ticks.
Cycle: the smallest timing unit of the system. The duration of a Cycle is determined by the system clock frequency, i.e., the number of Cycles per second.
Tick: the basic time unit of the operating system. Its duration is determined by the system frequency and the number of Ticks per second, which is configured by the user. The system's time management module provides time conversion, statistics, and delay functions to meet users' time-related needs.
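The second/millisecond-to-Tick conversion the time management module performs can be sketched as follows (the tick rate and the round-up policy are illustrative assumptions; in LiteOS the rate comes from a configuration macro):

```c
#include <assert.h>
#include <stdint.h>

#define TICKS_PER_SECOND 100u  /* e.g. a 100 Hz tick -> 10 ms per Tick */

/* Convert milliseconds to Ticks, rounding up so a delay is never shortened. */
uint32_t MsToTick(uint32_t ms) {
    return (ms * TICKS_PER_SECOND + 999u) / 1000u;
}

/* Convert Ticks back to milliseconds. */
uint32_t TickToMs(uint32_t ticks) {
    return ticks * (1000u / TICKS_PER_SECOND);
}
```

With a 100 Hz tick, a 15 ms delay rounds up to 2 Ticks (20 ms), which is why timing accuracy is bounded by the Tick period.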
A software timer is a timer simulated in software based on the Tick interrupt: after the configured number of Ticks elapses, the user-defined callback function is triggered. Timing accuracy is therefore related to the period of the system Tick clock. Hardware timers are limited by the hardware, and their number is not enough to meet users' actual needs; to provide more timers, the Huawei LiteOS operating system offers a software timer function. Software timers extend the number of timers available, allowing more timed services to be created.
Static tailoring: the software timer function can be switched off through a macro.
Running mechanism: software timers are a system resource; a contiguous block of memory is allocated for them at module initialization. The maximum number of timers supported by the system is configured by the LOSCFG_BASE_CORE_SWTMR_LIMIT macro in los_config.h. Software timers use one queue and one task resource of the system. Timer triggering follows the queue rule of first in, first out: of timers started at the same moment, one with a shorter period is always closer to the head of the queue than one with a longer period, so it is triggered first. Software timers use the Tick as their basic timing unit. When the user creates and starts a software timer, the system computes its expiry Tick from the current system Tick and the user-specified interval, and links the timer's control structure into the global timing linked list.
When a Tick interrupt arrives, the Tick interrupt handler scans the software timer's global timing list to check whether any timer has expired; expired timers are recorded.
After the Tick interrupt handler finishes, the software timer task (which has the highest priority) is woken, and in this task the timeout callback functions of the recorded timers are called.
Timer states:
- OS_SWTMR_STATUS_UNUSED (unused): the system initializes all timer resources to this state when the timer module is initialized.
- OS_SWTMR_STATUS_CREATED (created but not started): a timer enters this state after LOS_SwtmrCreate is called on an unused timer, or after LOS_SwtmrStop is called on a started timer.
- OS_SWTMR_STATUS_TICKING (counting): a timer enters this state after it is created and LOS_SwtmrStart is called, indicating that the timer is running.
Timer modes:
- The first type is a one-shot timer: after it is started, it triggers its timer event only once and is then deleted automatically.
- The second type is a periodic timer: it triggers timer events periodically until the user stops it manually.
- The third type is also a one-shot timer, but unlike the first type it is not deleted automatically after it expires; the user must delete it explicitly.
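The expiry computation and ordered insertion into the global timing list described above can be sketched as follows (a hypothetical structure, not the real LiteOS control block):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct SwTmr {
    uint32_t expiryTick;   /* absolute Tick at which the timer fires */
    struct SwTmr *next;
} SwTmr;

/* Start a timer: expiry = current system Tick + user interval, then hook the
 * control structure into the list ordered by expiry (earliest first), so a
 * shorter timer started at the same moment sits nearer the head. */
void SwtmrStart(SwTmr **head, SwTmr *t, uint32_t nowTick, uint32_t interval) {
    t->expiryTick = nowTick + interval;
    while (*head != NULL && (*head)->expiryTick <= t->expiryTick) {
        head = &(*head)->next;
    }
    t->next = *head;
    *head = t;
}

/* Called from the Tick interrupt: detach every timer whose expiry has
 * arrived, preserving order, so the timer task can run their callbacks. */
SwTmr *SwtmrScan(SwTmr **head, uint32_t nowTick) {
    SwTmr *expired = NULL, **tail = &expired;
    while (*head != NULL && (*head)->expiryTick <= nowTick) {
        *tail = *head;
        *head = (*head)->next;
        (*tail)->next = NULL;
        tail = &(*tail)->next;
    }
    return expired;
}
```

Using `<=` in the insertion keeps timers with equal expiry in first-in-first-out order, matching the queue rule in the text.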
In an operating system that supports multitasking, modifying data in a memory area requires three steps: read, modify, write. The same memory area, however, may be accessed by multiple tasks at the same time; if the modification is interrupted by another task partway through, the result may not be what was expected.
Disabling interrupts around the operation certainly ensures the expected result, but it obviously affects system performance.
The ARMv6 architecture introduced the LDREX and STREX instructions to support more sophisticated non-blocking synchronization of shared memory. Atomic operations built on them ensure that a "read-modify-write" sequence on the same data cannot be interrupted during execution; that is, the operation is indivisible from the perspective of other tasks.
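On top of LDREX/STREX (or the compiler's equivalents on other architectures), C11 exposes this as standard atomic operations; a minimal sketch:

```c
#include <assert.h>
#include <stdatomic.h>

/* A plain ++ on a shared int is a non-atomic read-modify-write that a task
 * switch can tear; atomic_fetch_add performs the same three steps
 * indivisibly. On ARM it compiles to an LDREX/STREX retry loop. */
atomic_int g_counter = 0;

int AtomicIncrement(void) {
    /* Returns the value held immediately before the addition. */
    return atomic_fetch_add(&g_counter, 1);
}
```

Each caller sees a distinct previous value, even if many tasks increment concurrently.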
The file system (FS for short) is one of the main forms of input and output in an operating system and is chiefly responsible for interacting with internal and external storage devices. Toward user mode, the file system exposes the POSIX standard operation interfaces provided by the C library (see the C library API documentation for details); toward kernel mode, the VFS virtual layer shields the differences between specific file systems. The basic architecture is as follows:
VFS (Virtual File System) is an abstraction layer over concrete file systems, providing users with a unified, Unix-like file operation interface. Since different file system types have different interfaces, a system containing multiple file system types would otherwise require different, non-standard interfaces to access each one. By adding the VFS layer, the system gains a unified abstract interface that shields the differences between the underlying heterogeneous file systems, so system calls that access the file system need not care about the underlying storage media or file system type, improving development efficiency.
The VFS framework is implemented as a tree structure in memory. Each node of the tree is a Vnode structure, and the parent-child relationships between nodes are stored in PathCache structures. The two main functions of VFS are node lookup and unified (standard) invocation.
Currently, the VFS layer mainly uses function pointers to implement its standard interface, dispatching to different interfaces according to file system type; improves path-lookup and file-access performance through the Vnode and PathCache mechanisms; manages partitions through mount-point management; and isolates FDs between processes through FD management. These mechanisms are briefly described below.
1. File system operation function pointers: the VFS layer uses function pointers to dispatch unified calls to the underlying operations of different file systems according to file system type. Each file system implements a set of Vnode operations, mount-point operations, and file operation interfaces, and stores them as function-pointer structures in the corresponding Vnode, mount point, and File structures, so that the VFS layer can access them.
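A minimal sketch of this dispatch pattern (the names and single `open` operation are hypothetical; real ops tables carry many more entries):

```c
#include <assert.h>
#include <stddef.h>

/* Per-file-system operation table: the VFS layer calls through function
 * pointers without knowing which concrete file system sits underneath. */
typedef struct VnodeOps {
    int (*open)(const char *path);
} VnodeOps;

/* Two stand-in file system implementations. */
static int FatOpen(const char *path)   { (void)path; return 1; }
static int Jffs2Open(const char *path) { (void)path; return 2; }

static const VnodeOps g_fatOps  = { .open = FatOpen };
static const VnodeOps g_jffsOps = { .open = Jffs2Open };

/* A Vnode carries the ops table of the file system it belongs to. */
typedef struct {
    const VnodeOps *ops;
} Vnode;

/* The unified VFS entry point: dispatch via the stored function pointer. */
int VfsOpen(const Vnode *v, const char *path) {
    return v->ops->open(path);
}
```

The caller of VfsOpen is identical regardless of the underlying file system; only the ops pointer stored in the Vnode differs.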
2. Vnode: a Vnode is the VFS layer's abstract encapsulation of a specific file or directory; it shields the differences between file systems to achieve unified resource management. Vnodes mainly come in the following types:
- Mount point: mounts a specific file system, for example / or /storage
- Device node: corresponds to a device in the system, such as a node under /dev
- File/directory node: corresponds to a file or directory in a specific file system, such as /bin/init
Vnodes are managed through a hash mechanism and an LRU mechanism. After the system starts, access to a file or directory first searches the Vnode cache in the hash list; on a cache miss, the target file or directory is looked up in the corresponding file system and a Vnode is created and cached for it. When the number of cached Vnodes reaches the upper limit, Vnodes that have not been accessed for a long time are evicted; mount-point Vnodes and device-node Vnodes do not participate in eviction. The current default Vnode capacity is 512, configurable through LOSCFG_MAX_VNODE_SIZE. Too many Vnodes cause a large memory footprint; too few degrade lookup performance.
3. PathCache: a path cache, corresponding to Vnodes. PathCache entries are also stored in a hash list; through the PathCache cached in a parent Vnode, the child Vnode can be obtained quickly, accelerating path lookup.
4. PageCache: a file-level kernel cache. Currently PageCache only supports caching binary files: when a file is first accessed, it is mapped into memory via mmap, which reduces kernel memory usage and greatly improves the speed of repeatedly reading and writing the same file.
5. Fd management: an fd (file descriptor) describes an open file or directory. In the current kernel, the total fd capacity is 896, divided into three types.
In the current LiteOS-A kernel, fds are isolated between processes: a process can only access its own fds. A process's fds are mapped into the global fd table for unified allocation and management, and a process can hold at most 256 file descriptors.
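The local-to-global mapping can be sketched as follows (a hypothetical table layout; the real kernel structures differ):

```c
#include <assert.h>

#define PROCESS_FD_MAX 256   /* per-process fd limit mentioned above */
#define GLOBAL_FD_MAX  896   /* total fd capacity of the kernel */

/* Per-process fd table: each local fd maps to a slot in the global fd
 * table, so one process cannot see another process's descriptors. */
typedef struct {
    int localToGlobal[PROCESS_FD_MAX];  /* -1 = unused */
} ProcFdTable;

void FdTableInit(ProcFdTable *t) {
    for (int i = 0; i < PROCESS_FD_MAX; i++) {
        t->localToGlobal[i] = -1;
    }
}

/* Allocate the lowest free local fd and bind it to a global slot.
 * Returns the fd the process actually sees, or -1 on failure. */
int FdAlloc(ProcFdTable *t, int globalFd) {
    if (globalFd < 0 || globalFd >= GLOBAL_FD_MAX) return -1;
    for (int i = 0; i < PROCESS_FD_MAX; i++) {
        if (t->localToGlobal[i] == -1) {
            t->localToGlobal[i] = globalFd;
            return i;
        }
    }
    return -1;   /* process hit its 256-fd limit */
}
```

Two processes can both hold local fd 0 while pointing at different global slots, which is the isolation the text describes.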
6. Mount point management: in the current kernel, all mount points in the system are managed uniformly through a linked list. The mount point structure records all Vnodes in the mounted partition; when a partition is unmounted, all of its Vnodes are released.
3.3.8.2.1 FAT
FAT is the abbreviation of File Allocation Table. A FAT file system mainly comprises three areas: the DBR area, the FAT area, and the DATA area. Each entry in the FAT area records information about the corresponding cluster on the storage device, including whether the cluster is in use, the number of the file's next cluster, and whether the file ends there. The FAT file system has multiple formats such as FAT12, FAT16, and FAT32, where 12, 16, and 32 denote the number of bits per FAT table entry in that format; they also limit the maximum file size in the file system. FAT supports a wide variety of media and is especially widespread on removable storage (USB flash drives, SD cards, mobile hard disks, etc.), maintaining good compatibility between embedded devices and desktop systems such as Windows and Linux.
The kernel supports FAT file systems in the FAT12, FAT16, and FAT32 formats. It has a small code size, low resource consumption, can be tailored, supports a variety of physical media, is compatible with Windows, Linux, and other systems, and supports features such as multi-device and multi-partition recognition. It supports multiple hard disk partitions and can create FAT file systems on both primary and logical partitions.
The LiteOS-A kernel improves FAT file system performance through Bcache, short for block cache. On a read or write, Bcache caches the sectors near the sectors being accessed to reduce the number of I/O operations. The basic cache unit of Bcache is the block; every block is the same size (by default there are 28 blocks, each caching 64 sectors of data). When the Bcache dirty-block rate (number of dirty sectors / total number of sectors) reaches the threshold, writeback is triggered; below the threshold, cached data is not written back to disk. If writeback must be guaranteed, developers should call sync or fsync to trigger it. Some interfaces of the FAT file system (such as close and umount) also trigger writeback, but developers should not rely on those interfaces to do so.
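The writeback decision above reduces to a simple threshold comparison; as a sketch (the threshold value used in the test is illustrative, not the kernel's default):

```c
#include <assert.h>
#include <stdbool.h>

/* Writeback triggers only when dirty sectors / total sectors reaches the
 * configured threshold; integer arithmetic avoids floating point. */
bool BcacheNeedWriteback(unsigned dirtySectors, unsigned totalSectors,
                         unsigned thresholdPercent) {
    if (totalSectors == 0) return false;
    return dirtySectors * 100u >= totalSectors * thresholdPercent;
}
```

This is why data sitting below the threshold stays in cache until the developer forces writeback with sync or fsync.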
3.3.8.2.2 JFFS2
JFFS2 is the abbreviation of Journalling Flash File System Version 2, a journaling file system designed for flash devices. The kernel's JFFS2 is mainly used on NOR flash. Its characteristics are: it is readable and writable, supports data compression, provides crash/power-off safety, supports "write balancing" (wear levelling), and so on. Flash differs from disk media in many ways, and running a disk file system directly on a flash device leads to performance and safety problems; solving this requires a file system designed specifically for flash, and JFFS2 is such a file system.
Listed here are a few important JFFS2 mechanisms/features that have some impact on developers and users:
1. Mount mechanism and speed: by design, JFFS2 stores all files as nodes of varying sizes laid out sequentially on the flash device. During mount, all node information must be read and cached in memory, so mount time grows linearly with the size of the flash device and the number of files. This is a native design issue of JFFS2. Users who care greatly about mount speed can enable the "Enable JFFS2 SUMMARY" option at kernel build time, which can greatly improve it: the information required for mounting is stored on flash in advance and is read and parsed at mount time, making the mount time roughly constant. This is a space-for-time trade that consumes about 8% extra space.
2. Support for write balancing: because of the physical properties of flash devices, reads and writes can only be performed on "blocks" of a certain size. To prevent particular blocks from wearing out prematurely, JFFS2 balances writes across blocks so that all blocks are written a roughly equal number of times, preserving the overall life of the flash device.
3. GC (garbage collection) mechanism: when a deletion occurs in JFFS2, the physical space is not released immediately; instead, an independent GC thread later performs actions such as space compaction and relocation, as in other GC mechanisms. JFFS2's GC has some impact on instantaneous read/write performance. In addition, to leave room for compaction, JFFS2 reserves about 3 blocks of space per partition that are not available to users.
4. Compression mechanism: the current JFFS2 automatically decompresses/compresses data on every read/write, so the actual I/O size differs from the size the user requested. In particular, when writing, you cannot predict from the write size and the remaining flash space whether the write will succeed or fail.
5. Hard link mechanism: JFFS2 supports hard links, and the underlying physical space is stored only once. Multiple hard links to the same file do not increase the space used; conversely, the physical space is released only when all hard links are deleted.
3.3.8.2.3 NFS
NFS is the abbreviation of Network File System. Its greatest strength is that it allows different machines running different operating systems to share files with one another over a network, so users can simply regard it as a file system service, roughly equivalent to a shared folder under Windows.
The NFS file system of the LiteOS-A kernel is an NFS client. The NFS client can mount a directory shared by a remote NFS server onto the local machine to run programs and share files without occupying the current system's storage space; from the local machine's perspective, the remote directory behaves like part of the local file system.
3.3.8.2.4 RAMFS
RAMFS is a dynamically resizable RAM-based file system. RAMFS has no backing store: write operations to RAMFS allocate directory entries and page cache as usual, but the data is never written back to any other storage medium, so it is lost on power-off.
Because RAMFS keeps all files in RAM, read/write operations take place in RAM. RAMFS can be used to store temporary or frequently modified data, such as the /tmp and /var directories, which avoids the wear of reading and writing persistent storage and also increases data read/write speed.
3.3.8.2.5 Procfs
procfs is the abbreviation of process file system. It is a virtual file system that presents process and other system information in the form of files. Compared with obtaining information through API calls, obtaining system information through file operations is more convenient.
In the LiteOS-A kernel, procfs is automatically mounted at /proc during boot, and only kernel modules can create files in it.
4 Driver Basics
A computer system contains many hardware resources, and the role of the driver is to enable those hardware devices to operate normally. HarmonyOS is oriented toward the era of the Internet of Everything, which involves a very large number of hardware devices; these devices are highly discrete, and their performance and configurations differ greatly. This calls for a more flexible and more capable driver framework.
A driver, in full a device driver, is a special program added to the operating system that contains information about a hardware device and enables the computer to communicate with that device. Different operating systems require different hardware drivers.
On traditional Linux operating systems, device drivers fall roughly into three categories according to the characteristics of device read/write operations. Character devices, also called "byte devices", are operated byte by byte by software, for example LCD displays and LEDs. Block devices have a block size defined when the device itself is designed, which software cannot change, for example storage devices such as hard drives and SD cards. Network devices are a driver model designed specifically for network interface cards, mainly used to support socket-related functions in the API.
The HDF (Hardware Driver Foundation) framework provides driver loading, driver service management, and driver message management. It aims to build a unified driver architecture platform, offer driver developers a more precise and efficient development environment, and achieve "develop once, deploy on multiple systems".
HDF driver loading includes on-demand loading and sequential loading. On-demand loading means the HDF framework can load a driver by default during system startup or load it dynamically after startup; sequential loading means the HDF framework loads drivers during startup in order of their priority. HDF driver service management centrally manages driver services; developers can obtain a driver's services directly through the capability interfaces provided by the HDF framework. The HDF framework also provides a unified driver message mechanism that lets user-mode applications send messages to kernel-mode drivers and lets kernel-mode drivers send messages to user-mode applications.
The HDF driver framework has the following features: a flexible framework, a componentized driver model, a standardized driver platform, a unified platform base, standardized configuration interfaces, and dynamic driver installation.
The HDF framework takes the componentized driver model as its core design idea, providing developers with more refined driver management and making driver development and deployment more standardized. The HDF framework places drivers of the same type in the same host. Developers can also develop and deploy driver functionality independently in layers, and one driver can support multiple nodes. The driver model managed by the HDF framework is shown in the figure below.
The configuration file of the HDF driver framework contains the driver device descriptions and the driver's private information. On-demand loading of a driver defined by the HDF framework is controlled by the preload field in the configuration file. The values of the preload field and their meanings are as follows:
- preload 0: the driver is loaded by default during system startup.
- preload 1: if the system supports quick start, the driver is loaded after system startup completes; otherwise this value has the same meaning as 0.
- preload 2: the driver is not loaded by default but supports subsequent dynamic loading; when user mode requests the driver's service and the service does not exist, the HDF framework attempts to load the driver dynamically.
Sequential loading of drivers is determined by the priority in the configuration file (an integer in the range 0 to 200). The smaller the priority value, the higher the priority. The loading order is determined first by the priority of the host; if hosts have the same priority, the order is determined by the priority values of the drivers within the host.
A driver service is the object through which an HDF driver device provides its capabilities externally, and it is managed uniformly by the HDF framework. Driver service management includes publishing and obtaining driver services. The HDF framework defines the driver's external service-publishing strategy, controlled by the policy field in the configuration file. The values of the policy field include:
- policy 2: SERVICE_POLICY_CAPACITY, the driver publishes services to both kernel mode and user mode.
- policy 3: SERVICE_POLICY_FRIENDLY, the driver service is not published externally but can be subscribed to.
- policy 4: SERVICE_POLICY_PRIVATE, the driver's private service is neither published externally nor subscribable.
When user-mode applications and kernel-mode drivers need to interact, the message mechanism of the HDF framework can be used. The message mechanism has two main functions: one is for user-mode applications to send messages to drivers, and the other is for user-mode applications to receive events proactively reported by drivers. The model of the HDF message mechanism is shown below:
Driver development based on the HDF framework consists of two main parts: driver implementation and driver configuration. The specific steps are as follows. In the driver implementation stage, develop the driver's business code and register the driver entry. After the driver implementation is complete, compile the driver; compilation must use the Makefile template, and the resulting file is then linked into the kernel image. After compilation, configure the driver: in the driver configuration stage, the driver's device description (and any private configuration) must be added to the HDF configuration files.
Following the driver development process above, the driver business code is implemented first:

#define HDF_LOG_TAG sample_driver  // Tag used when printing logs; if not defined, the default tag is used

// The driver provides its external service capability by binding the relevant service
// in Bind; Init initializes the driver; Release frees driver resources (bodies omitted here)
int32_t HdfSampleDriverBind(struct HdfDeviceObject *deviceObject);
int32_t HdfSampleDriverInit(struct HdfDeviceObject *deviceObject);
void HdfSampleDriverRelease(struct HdfDeviceObject *deviceObject);

// The driver entry must be a global variable of type HdfDriverEntry (defined in hdf_device_desc.h)
struct HdfDriverEntry g_sampleDriverEntry = {
    .moduleVersion = 1,
    .moduleName = "sample_driver",
    .Bind = HdfSampleDriverBind,
    .Init = HdfSampleDriverInit,
    .Release = HdfSampleDriverRelease,
};

// HDF_INIT registers the driver entry with the HDF framework. When loading the driver, the framework
// first calls Bind and then Init; if Init fails, the framework calls Release to free the driver's
// resources and exits.
HDF_INIT(g_sampleDriverEntry);
Then use the Makefile template to complete compilation and link the resulting file into the kernel image. The information the HDF framework needs to load a driver comes from the driver device description defined by the framework, so a driver developed on the HDF framework must add its device description to the device_info.hcs configuration file defined by the framework. The driver's device description is filled in as follows:
sample_host :: host {
    hostName = "host0";  // Host name; a host node is a container holding a certain type of drivers
    priority = 100;      // Host startup priority (0-200); the larger the value, the lower the priority;
                         // 100 is recommended; if priorities are equal, the host load order is not guaranteed
    device_sample :: device {
        device0 :: deviceNode {
            priority = 100;                    // Driver startup priority (0-200), same rule as the host priority
            permission = 0664;                 // Permission for the driver to create its device node
            moduleName = "sample_driver";      // Driver name; must match the moduleName in the driver entry structure
            serviceName = "sample_service";    // Name of the service the driver publishes; must be unique
            deviceMatchAttr = "sample_config"; // Keyword matching the driver's private data; must equal the
                                               // match_attr value in the private data configuration table
        }
    }
}
If the driver has private configuration, a driver configuration file can be added to hold the driver's default configuration information. When the HDF framework loads the driver, it obtains the corresponding configuration, saves it in the property member of HdfDeviceObject, and passes it to the driver through Bind and Init. An example driver configuration:

root {
    SampleDriverConfig {
        sample_version = 1;
        sample_bus = "I2C_0";
        match_attr = "sample_config";  // Must equal the deviceMatchAttr value in device_info.hcs
    }
}

After the configuration is defined, the configuration file must be added to the board-level configuration entry file hdf.hcs, for example:

#include "sample/sample_config.hcs"
HCS (HDF Configuration Source) is the configuration description source code of the HDF driver framework, expressed mainly as key-value pairs. It decouples configuration from driver code, making configuration management easier for developers.
HC-GEN (HDF Configuration Generator) is a tool that converts HCS configuration files into formats readable by software: in low-performance environments, the configuration is converted into configuration-tree source code that the driver can read directly through C code; in high-performance environments, it is converted into an HCB (HDF Configuration Binary) file, and the driver obtains the configuration through the configuration-parsing interfaces provided by the HDF framework.
An HCS file is compiled by HC-GEN into an HCB file. The HCS Parser module in the HDF driver framework rebuilds the configuration tree from the HCB file, and HDF driver modules obtain the configuration content through the configuration-reading interfaces provided by HCS Parser. A schematic of the driver configuration process is as follows:
The IoT (Internet of Things) proprietary hardware service subsystem provides interfaces for operating hardware devices, including FLASH, GPIO, I2C, PWM, UART, WATCHDOG, and others. Its directory structure is as follows:

/base/iot_hardware/peripheral
└── interfaces
    └── kits    # IoT device operation interfaces; directory for externally provided interfaces
4.4.1 GPIO
GPIO (full name General-Purpose Input/Output) means general-purpose input and output. Usually the GPIO controller manages all GPIO pins in groups; each group of GPIO pins has one or more registers associated with it, and operations on the pins are performed by reading and writing those registers.
A GPIO is a chip pin that can serve multiple functions. Through GPIO ports, users can interact with hardware (such as a UART), control the operation of hardware (such as LEDs and buzzers), and read hardware status signals (such as interrupt signals).
The GPIO interface defines a set of standard methods for operating GPIO pins, including:
- Setting the pin direction: the direction can be input or output (the high-impedance state is not yet supported);
- Reading and writing the pin level: the level can be low or high;
- Setting the pin interrupt service function: configuring a pin's interrupt response function and interrupt trigger mode.
The GPIO standard API addresses a pin by its GPIO pin number. The general flow of using GPIO is shown in the figure above: set the pin direction, then read or write the pin level or enable the pin interrupt.
To set the pin direction, use the function int32_t GpioSetDir(uint16_t gpio, uint16_t dir). Its parameters are the GPIO pin number to set and the direction value. The direction value has three possibilities: GPIO_DIR_IN (input), GPIO_DIR_OUT (output), and GPIO_DIR_ERR (invalid direction). A return value of 0 means the setting succeeded; a return value less than 0 means it failed.
To read the level of a GPIO pin, use the function int32_t GpioRead(uint16_t gpio, uint16_t *val). Its two parameters are the GPIO pin number to read and a pointer that receives the level read. The level read has three possible values: GPIO_VAL_LOW (low GPIO level), GPIO_VAL_HIGH (high GPIO level), and GPIO_VAL_ERR (invalid GPIO level). A return value of 0 means the read succeeded; a return value less than 0 means it failed.
To write a level value to a GPIO pin, use the function int32_t GpioWrite(uint16_t gpio, uint16_t val). Its two parameters are the GPIO pin number to write and the level value to write, which can be GPIO_VAL_LOW (low GPIO level) or GPIO_VAL_HIGH (high GPIO level). A return value of 0 means the write succeeded; a return value less than 0 means it failed.
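The set-direction/write/read flow above can be sketched in C. The stub implementations below stand in for the real GPIO driver (which is only available on-device), so the example is self-contained; the pin number and the 32-pin limit are hypothetical.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Direction and level values as described in the text. */
#define GPIO_DIR_IN   0
#define GPIO_DIR_OUT  1
#define GPIO_VAL_LOW  0
#define GPIO_VAL_HIGH 1

/* Stub implementations standing in for the real GPIO driver. */
static uint16_t fake_level[32];

int32_t GpioSetDir(uint16_t gpio, uint16_t dir) {
    (void)dir;
    return gpio < 32 ? 0 : -1;          /* 0 on success, <0 on failure */
}

int32_t GpioWrite(uint16_t gpio, uint16_t val) {
    if (gpio >= 32) return -1;
    fake_level[gpio] = val;
    return 0;
}

int32_t GpioRead(uint16_t gpio, uint16_t *val) {
    if (gpio >= 32 || val == NULL) return -1;
    *val = fake_level[gpio];
    return 0;
}

/* Typical usage: set direction, write a level, then read it back. */
int blink_once(uint16_t pin) {
    uint16_t level = 0;
    if (GpioSetDir(pin, GPIO_DIR_OUT) != 0) return -1;
    if (GpioWrite(pin, GPIO_VAL_HIGH) != 0) return -1;
    if (GpioRead(pin, &level) != 0) return -1;
    return level;                        /* the level read back */
}
```

On a real board the same three calls are made against the device SDK instead of the stubs.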
4.4.2 UART
UART (Universal Asynchronous Receiver/Transmitter) is a universal asynchronous transceiver. It is a universal serial data bus used for asynchronous communication. The bus communicates in two directions and supports full-duplex transmission. UART is widely used, often to output printed debugging information, and can also be used to connect various external modules.
The connection diagram of two UART devices is as follows. A UART is generally connected to other modules with 2 or 4 lines, which are:
TX: transmit data line, connected to the peer's RX;
RX: receive data line, connected to the peer's TX;
RTS: request-to-send signal, indicating whether the device is ready to accept data, connected to the peer's CTS;
CTS: clear-to-send signal, used to determine whether data can be sent to the peer, connected to the peer's RTS.
Before UART communication, both the sending and receiving parties need to agree on some parameters: baud rate, data format (start bit, data bit, check bit, stop bit), etc.
During communication, the UART sends data to the peer through TX and receives data from the peer through RX. When the UART receive buffer reaches a predetermined threshold, RTS is set to indicate that no more data can be received; the peer's CTS detects this and stops sending data.
The UART interface defines a set of common methods for operating UART ports, including obtaining and releasing device handles, reading and writing
data, obtaining and setting baud rate, and obtaining and setting device attributes.
The bit sequence of a UART data transmission is shown in the figure below: the start bit is 0, followed by 7 bits of character data, then the check (parity) bit and finally the stop bit. There may be idle bits between two characters.
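The frame layout just described can be sketched as a small encoder. Even parity is an assumption here (the text does not specify the parity type), and the LSB-first bit order follows common UART practice.

```c
#include <assert.h>
#include <stdint.h>

#define UART_FRAME_BITS 10  /* start + 7 data + parity + stop */

/* Encode one 7-bit character into the frame described in the text:
 * start bit (0), 7 data bits LSB first, parity bit, stop bit (1). */
void uart_encode_frame(uint8_t ch, uint8_t bits[UART_FRAME_BITS]) {
    int ones = 0;
    bits[0] = 0;                          /* start bit is 0 */
    for (int i = 0; i < 7; i++) {         /* 7 data bits, LSB first */
        bits[1 + i] = (uint8_t)((ch >> i) & 1u);
        ones += bits[1 + i];
    }
    bits[8] = (uint8_t)(ones % 2);        /* even parity bit */
    bits[9] = 1;                          /* stop bit is 1 */
}
```

Both sides must agree on the baud rate and on this frame format before communicating, which is exactly the parameter negotiation the text mentions.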
4.4.3 I2C
The I2C (Inter-Integrated Circuit) bus is a simple, bidirectional two-wire synchronous serial bus developed by Philips. I2C works in master-slave mode: there is usually one master device and one or more slave devices, connected through the SDA (Serial Data) line and the SCL (Serial Clock) line.
I2C data transmission must use a start signal as the start condition and a stop signal as the stop condition. Data is transmitted in units of bytes, high-order bit first, bit by bit. Each device on the I2C bus can act as a master or a slave, and each device has a unique address. When the master needs to communicate with a slave, it broadcasts the slave's address on the bus; the slave that matches this address sends an acknowledgment signal, and the transfer is established.
The I2C interface defines a common set of methods for I2C transmission, including I2C controller management (opening or closing the controller) and I2C message transmission, which can be customized through an array of message transmission structures.
I2C timing mainly consists of four elements: start signal, stop signal, response (0), non-response (1).
The start signal is a falling edge on SDA while SCL is high; the stop signal is a rising edge on SDA while SCL is high. SDA low while SCL is high indicates an acknowledgment (ACK); SDA high while SCL is high indicates a non-acknowledgment (NACK).
I2C data transmission must use a start signal as the start condition and a stop signal as the stop condition. Data is transmitted in units of bytes, high-order bit first, bit by bit. The specific data transfer timing is shown in the following figure:
(Figure: I2C transfer timing; the legend distinguishes the SCL clock count, SDA driven by the master, and SDA driven by the slave.)
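After the start condition, the first byte on the bus carries the slave address and the read/write bit. A minimal sketch, assuming the common 7-bit addressing scheme (the text does not state the address width):

```c
#include <assert.h>
#include <stdint.h>

#define I2C_WRITE 0u   /* R/W bit: 0 = master writes to the slave */
#define I2C_READ  1u   /* R/W bit: 1 = master reads from the slave */

/* Build the address byte broadcast right after the start condition:
 * the 7-bit slave address in the upper bits, the R/W flag in bit 0. */
uint8_t i2c_address_byte(uint8_t addr7, uint8_t rw) {
    return (uint8_t)(((addr7 & 0x7Fu) << 1) | (rw & 1u));
}
```

The slave whose address matches the upper 7 bits pulls SDA low during the next clock (ACK), establishing the transfer described above.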
4.4.4 SPI
SPI is the abbreviation of Serial Peripheral Interface. It is a high-speed, full-duplex, synchronous communication bus. SPI was developed by Motorola and is used for communication between master and slave devices; it is often used to communicate with flash memory, real-time clocks, AD converters, and similar peripherals.
As shown in the connection diagram of one master device and two slave devices, slave device A and slave device B share the master's three pins SCLK, MISO, and MOSI. The chip select CS0 of slave device A is connected to the master's CS0, and the chip select CS1 of slave device B is connected to the master's CS1. The SPI master-slave connection is shown in the figure below:
4.4.5 SDIO
SDIO is the abbreviation of Secure Digital Input and Output. It is a peripheral interface evolved from the SD memory card interface. The SDIO interface is compatible with
previous SD memory cards and can be connected to devices that support the SDIO interface.
SDIO is often used in the development of peripherals for mobile devices such as mobile phones, making it easier to connect external peripherals to the device.
Common SDIO peripherals include WLAN devices, GPS, CAMERA, and Bluetooth. The HOST-DEVICE connection of SDIO is shown in the figure below:
4.4.6 RTC
RTC (real-time clock) is the real-time clock device in the operating system, providing accurate real time and timed alarm functions. When the device is powered off and runs on a backup battery, the RTC continues to keep the operating system time; after the device powers on, the RTC provides the real-time clock to the operating system, ensuring continuity of system time across power outages.
Taking the STM32F103 chip as an example, it has an independently powered real-time clock and a separate oscillation circuit inside, eliminating the need for an external clock
chip.
During operating system startup, the driver management module loads the RTC driver according to the configuration file; the RTC driver then detects the RTC device and initializes it. The usage process of the RTC device is shown in the figure below:
4.4.7 WATCHDOG
A watchdog, also called a watchdog timer, is a hardware timing device. When an error in the system's main program prevents the watchdog timer from being cleared in time, the watchdog sends a reset signal to the system, restoring it from a hung state to normal operation.
The watchdog can be regarded as a special timer: when it expires, it not only generates an interrupt but can also reset the CPU. The working principle of the watchdog timer is shown in the figure below: when an error occurs in the main program, the watchdog takes effect and returns the system to normal through the reset signal.
Using the watchdog involves the following steps: open the watchdog device, set the timeout period, and start the watchdog; then feed the dog periodically (to keep the watchdog timer from expiring); and close the watchdog device when it is no longer needed.
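The start/feed/expire behaviour described above can be illustrated with a toy simulation (this is not a real driver; the tick counter merely stands in for the hardware timer):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy watchdog: feeding resets the counter; if the counter ever
 * reaches the timeout, the watchdog "fires", i.e. resets the system. */
typedef struct {
    int timeout;    /* ticks allowed between feeds */
    int counter;    /* ticks elapsed since the last feed */
    bool started;
} Watchdog;

void wdg_start(Watchdog *w, int timeout) {
    w->timeout = timeout;
    w->counter = 0;
    w->started = true;
}

void wdg_feed(Watchdog *w) {
    w->counter = 0;             /* "feed the dog" */
}

/* Advance one tick; returns true if the watchdog fired. */
bool wdg_tick(Watchdog *w) {
    if (!w->started) return false;
    if (++w->counter >= w->timeout) {
        w->counter = 0;
        return true;            /* reset signal issued */
    }
    return false;
}
```

As long as the main program feeds in time, the reset never triggers; a hang that stops the feeding lets the counter reach the timeout.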
4.4.8 ADC
ADC (Analog-to-Digital Converter) is an analog-to-digital converter. Quantities in the real world (such as temperature, humidity, and light intensity) are continuous, that is, analog signals, whereas the signals that microcontrollers and electronic computers can recognize are discrete digital signals. To use real-world quantities, a device is therefore needed to convert analog information into digital form. Analog-to-digital conversion generally goes through the steps of sampling, quantization, and encoding.
For example, in an analog voltmeter scenario, a 2-bit ADC could perform the analog-to-digital conversion as follows:

Code  Voltage
00    0 V
01    1.1 V
10    2.2 V
11    3.3 V
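The 2-bit mapping above can be sketched as a quantizer. The rounding-to-nearest-step behaviour is an assumption for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Quantize a voltage in [0, 3.3] V into the 2-bit codes of the table:
 * one step per 1.1 V, rounded to the nearest step. */
uint8_t adc2_quantize(double volts) {
    if (volts < 0.0) volts = 0.0;
    if (volts > 3.3) volts = 3.3;
    uint8_t code = (uint8_t)(volts / 1.1 + 0.5);  /* round to nearest */
    return code > 3 ? 3 : code;
}
```

A real ADC does the same thing in hardware, only with far more steps (e.g. 12-bit resolution gives 4096 codes instead of 4).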
4.4.9 PWM
PWM (Pulse-Width Modulation) is pulse width modulation. By modulating the width of a series of pulses, it produces the required waveform (both shape and amplitude) and digitally encodes an analog signal level. In other words, changes in the signal, energy, and so on are adjusted by varying the duty cycle. The duty cycle is the percentage of one signal cycle during which the signal is at high level. The formula is: duty cycle = high-level time / cycle time × 100%. For example, the duty cycle of a square wave is 50%, as shown in the figure below.
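The duty-cycle formula above translates directly into code:

```c
#include <assert.h>

/* Duty cycle = high-level time / cycle time * 100%, as in the text.
 * The times may be in any unit as long as both use the same one. */
double pwm_duty_cycle(double high_time, double cycle_time) {
    if (cycle_time <= 0.0) return 0.0;   /* guard against a zero-length cycle */
    return high_time / cycle_time * 100.0;
}
```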
5 Subsystem development
HarmonyOS divides some commonly used functions into independent subsystems to better implement related functions. These subsystems include: the compilation and build subsystem, distributed remote startup subsystem, common basic subsystem, OTA upgrade subsystem, startup and recovery subsystem, soft bus subsystem, graphics subsystem, media subsystem, AI framework subsystem, sensor service framework and user service framework, security subsystem, test subsystem, DFX, and XTS.
The compilation and build subsystem is the most basic subsystem. Based on the gn and ninja build tools, it supports HarmonyOS component-based development and provides the following basic functions: assembling products from components and compiling them, building the source code of chip solution vendors independently, and building individual components independently.
Before using compilation to build a subsystem, you should have an understanding of the following basic concepts.
A subsystem is a logical concept consisting of one or more concrete components. HarmonyOS as a whole follows a layered design; from bottom to top, the layers are the kernel layer, system service layer, framework layer, and application layer. System functions are expanded step by step along "system > subsystem > component". In multi-device deployment scenarios, non-essential subsystems or components can be tailored as actually needed.
A component is the smallest reusable, configurable, and tailorable functional unit of the system. Components have directory independence and can be developed in parallel.
gn is short for Generate Ninja and is used to generate ninja files; ninja is a small build system focused on speed. hb is the HarmonyOS command-line tool used to execute compilation commands.
The compilation and build subsystem has the following directory structure (partial):

build/lite
├── ...        # ld script
└── toolchain  # compilation toolchain configuration: compiler path, compile options, link options, etc.
The main process of compilation and construction is shown in the figure below. The main steps are the setting and compilation process.
The build has two key operations: hb set, which sets the HarmonyOS source code directory and the product to be compiled; and, after the settings are complete, hb build, which compiles the product, development board, or component. The main compilation flow is as follows. First, the build configuration is read, including the config.gni file of the development board selected by the product, which mainly contains the compilation toolchain and the compile and link commands and options. Then the gn gen command is called to read the product configuration and generate the product's out directory and ninja files. Next, ninja -C out/board/product is called to start compilation. The last step packages the system image: the component build products are packaged, file attributes and permissions are set, and the file system image is created. After compilation, the files are located in the project's out directory and can be used for flashing.
In order to realize that chip solutions, product solutions and HarmonyOS are decoupled and pluggable, the paths, directory trees and configurations of components, chip
solutions and product solutions need to follow certain rules. The component source code path naming rule is: {domain}/{subsystem}/{component}, and the component directory tree rules
are as follows:
component
Properties such as the component name, source code path, function description, whether it is required, compilation targets, RAM, ROM, compilation output, adapted kernel, configurable features, and dependencies are defined in the corresponding subsystem's json file under the build/lite/components directory. When adding a component, add the component definition to the json file of the subsystem it belongs to. A component configured for a product must already be defined in some subsystem; otherwise, verification fails.
Taking the sensor service component of the pan-sensor subsystem as an example, the field description of the component attribute definition description file is as follows:
{
"components": [
{
"component": "sensor_lite", # component name
"optional": "true", # Whether the component is required for the minimum system
"base/sensors/sensor_lite"
],
"targets": [ #Component compilation entry
"//base/sensors/sensor_lite/services:sensor_service"
],
"rom": "92KB", # Component ROM value
"deps":
{ "components": [ # Other components that the component depends on
"samgr_lite",
"ipc_lite"
],
"third_party": [ # The third-party open source software that the component depends on
"bounds_checking_function"
]
}
}
]
}
The compilation script language of a component is gn, and a component is a compilation target defined in gn; it can be a static library, a dynamic library, an executable file, or a group. Suggestions for writing a component's BUILD.gn are as follows:
2) A component's externally configurable feature variables need to be declared in the component's BUILD.gn. The feature variable naming rule is: ohos_{subsystem}_{component}_{feature}. Features also need to be defined synchronously in the component description and configured in the product configuration.
declare_args() {
  enable_ohos_graphic_ui_animator = false   # animator feature switch
  ohos_ohos_graphic_ui_font = "vector"      # configurable font type: vector or bitmap
}
shared_library("base") {
sources = [
...
] include_dirs = [
...
]
}
if(enable_ohos_graphic_ui_animator )
{ shared_library("animator")
{ sources = [
...
] include_dirs = [
...
] deps = [ :base ]
}
}
...
# The target name is recommended to be consistent with the component name. The component target type can be
# executable (bin file), shared_library (dynamic library), static_library (static library), or group.
if (enable_ohos_graphic_ui_animator) {
  deps += [
    ":animator"
  ]
}
A chip solution is a complete solution based on a particular development board, including drivers, device-side interface adaptation, the development board SDK, and so on. A chip solution is a special component; its source code path rule is: device/{chip solution vendor}/{development board}. Chip solution components are compiled by default together with the development board selected for the product. The chip solution directory tree rules are as follows:
device
└── company              # chip solution vendor
    └── ...              # development board
        └── liteos_a     # optional, liteos kernel version
            └── config.gni   # liteos_a build configuration
config.gni holds the build-related configuration of the development board. The parameters in this file are used when compiling all OS components and are globally visible to the system during the compilation phase. The key fields are introduced as follows:
kernel_type: The kernel type used by the development board, for example: "liteos_a", "liteos_m", "linux".
kernel_version: The kernel version used for development, for example: "4.19".
board_toolchain: the custom compilation toolchain name of the development board, for example: "gcc-arm-none-eabi". If empty, the default ohos-clang is used.
board_toolchain_type: Compilation toolchain type, currently supports gcc and clang. For example: "gcc", "clang".
A product solution is a complete product based on a development board. It mainly includes the product's OS adaptation, component assembly and configuration, startup configuration, and file system configuration. The source code path rule is: vendor/{product solution vendor}/{product name}. A product solution is also a special component, and its directory tree rules are as follows:
vendor
└── company                      # product solution vendor
    └── product                  # product name
        ├── init_configs
        │   ├── etc              # init process startup configuration (optional, required only by the linux kernel)
        │   └── init.cfg         # system service startup configuration
        ├── hals                 # OS adaptation of the product solution
        └── ...
Newly added products must create directories and files according to the above rules, and the compilation and build system will scan configured products according to these rules.
vendor/company/product/init_configs/etc: this folder contains the rcS script, Sxxx scripts, and the fstab script. The init process executes these scripts before starting the system services, in the order rcS -> fstab -> S00-xxx. The content of the Sxxx scripts depends on the needs of the development board and product, and mainly includes creating device nodes and directories, scanning device nodes, and modifying file permissions. In the product's BUILD.gn, these files are copied on demand to the product's out directory and finally packaged into the rootfs image.
vendor/company/product/init_configs/init.cfg, the configuration file for the init process to start the service. The currently supported parsing
commands are:
{
"jobs" : [{ # job array, one job corresponds to a command set. Job execution order: pre-init -> init -> post-init.
"name" : "pre-init",
"cmds" :
[ "mkdir /storage/data", # Create a directory
"chmod 0755 /storage/data", # Modify permissions. The format of permission value is 0xxx, such as 0755
"mkdir /storage/data/log",
"chmod 0755 /storage/data/log",
"chown 4 4 /storage/data/log", # Modify the group, the first number is uid, the second number is gid
...
"mount vfat /dev/mmcblock0 /sdcard rw,umask=000" # Mount, the format is: mount
[filesystem type] [source] [target] [flags] [data]
# Among them, flags only support: nodev, noexec, nosuid and rdonly
]
}, {
"name" : "init",
"cmds" : [ # Start the startup service in cmds array order
"start shell", # Note: There is exactly one space between start and service name.
...
"start service1"
]
}, {
"name" : "post-init", # The last job to be executed, the processing after the init process is started (such as after driver initialization)
mount device)
"cmds" : []
}
    ],
    "services" : [{
            "path" : ["/sbin/getty", "-n", "-l", "/bin/sh", "-L", "115200", "ttyS000", "vt100"],
            # full path of the executable file; "path" must be the first element
"uid" : 0, # The uid of the process must be consistent with the uid of the binary file
"gid" : 0, # The gid of the process must be consistent with the gid of the binary file
"once" : 0, # Is it a one-time process? 1: After the process exits, init will not be re-pulled.
"importance" : 0, restart # Is it a key process? 1: It is a key process. If the process exits, init will
the board. 0: Non-critical process. If the process exits, init will not restart the board.
"caps" : [4294967295]
},
...
]
}
For the interface, please see the readme documentation of each component.
vendor/company/product/config.json: config.json is the main entry for compilation and build, containing configuration information such as the development board, OS components, and kernel. Taking the ipcamera product based on the hispark_taurus development board as an example, the configuration is as follows:
{
"product_name": "ipcamera", # product name
"subsystems": [
{
"subsystem": "aafwk", # Selected subsystem
"components": [
{ "component": "ability", "features":[ "enable_ohos_appexecfwk_feature_ability = true" ] }
# Selected components and component property configuration
]
},
{
...
}
    ...
  ]
}
vendor/company/product/fs.yml: this file configures the file system image production process, packaging the build products into file system images such as the user-space root file system rootfs.img and the readable-writable userfs.img. It consists of multiple lists, one per file system. The fields are:

fs_dir_name: required; declares the file system name, such as rootfs or userfs.
fs_dirs: optional; configures the mapping between file directories under out and file system directories; each file directory corresponds to one list entry.
source_dir: optional; the source directory under out. If missing, an empty directory is created in the file system.
ignore_files: optional; declares files to ignore during the copy.
dir_mode: optional; file directory permissions, 755 by default.
file_mode: optional; permissions of all files in the directory, 555 by default.
fs_filemode: optional; configures files that need special permission declarations.
fs_symlink: optional; configures file system soft links.
fs_make_cmd: required; configures the script that makes the file system. The scripts provided by the OS are under build/lite/make_rootfs and support the linux and liteos kernels and the ext4, jffs2, and vfat formats; chip solution vendor customization is also supported.
fs_attr: optional; dynamically adjusts the file system according to configuration items.

The fs_symlink and fs_make_cmd fields support the variable ${root_path}. The ${fs_dir} file system directory is composed of the variables ${root_path} and ${fs_dir_name}. fs.yml is optional; development boards without a file system may omit it.
vendor/company/product/BUILD.gn: the entry point for product compilation, mainly used to compile the solution vendor's source code and copy the startup configuration files. If a product is selected for compilation, the BUILD.gn in the corresponding product directory is compiled by default. A typical product BUILD.gn looks like this:
group("product") { # The target name must be consistent with the product name, that is, the name of the third-level directory
deps = []
# Copy init
configuration deps +=
[ "init_configs" ] # Others
...
bright.
hb set command.
hb set -h
  -h, --help            show this help message and exit
  -root [ROOT_PATH], --root_path [ROOT_PATH]
                        Set OHOS root path
  -p, --product         Set OHOS board and kernel
With no parameters, hb set enters the default setting process. hb set -root dir directly sets the code root directory, and hb set -p sets the product to be compiled.
hb
[OHOS INFO] root path: xxx
[OHOS INFO] board: hispark_taurus
[OHOS INFO] kernel: liteos
[OHOS INFO] product: ipcamera
[OHOS INFO] product path: xxx/vendor/hisilicon/ipcamera [OHOS
INFO] device path: xxx/device/hisilicon/hispark_taurus/sdk_linux_4.19
hb build -h
usage: hb build [-h] [-b BUILD_TYPE] [-c COMPILER] [-t [TEST [TEST ...]]]
                [--dmverity] [-p PRODUCT] [-f] [-n]
                [component [component ...]]

positional arguments:
  component             name of the component

optional arguments:
  -h, --help            show this help message and exit
  -b BUILD_TYPE, --build_type BUILD_TYPE
                        release or debug version
  -t [TEST [TEST ...]], --test [TEST [TEST ...]]
                        compile test suit
  --dmverity            Enable dmverity
  -p PRODUCT, --product PRODUCT
                        build a specified product with
                        {product_name}@{company}, eg: ipcamera@hisilcon
  -f, --full            full code compilation
  -T [TARGET [TARGET ...]], --target [TARGET [TARGET ...]]
                        Compile single target
With no parameters, hb build compiles according to the code path and product that have been set; the compilation options remain consistent with the previous build. The -f option deletes all build products of the current product, equivalent to hb clean + hb build. hb build {component_name} compiles a single component based on the board and kernel of the set product. hb build -p ipcamera@hisilicon compiles a product without setting it first, skipping the set step. Executing hb build alone under device/device_company/board enters the kernel selection interface; after the selection is completed, an image containing only the kernel and drivers is compiled for the board in the current path. hb clean clears the build products of the corresponding product in the out directory, leaving only args.gn and build.log. To clear a specific path, pass the path parameter: hb clean out/board/product. By default, the out path corresponding to the currently set product is cleared.
hb clean
usage: hb clean [-h] [out_path]
positional arguments:
out_path clean a specified path.
optional arguments:
-h, --help show this help message and exit
This section introduces how to add a component. First determine the subsystem the component belongs to and the component name, then add the component source code following the steps below. After source code development is completed, add the component's compilation script.
Taking the compiled component hello_world executable file as an example, applications/sample/hello_world/BUILD.gn can be written as:
executable("hello_world")
{ include_dirs =
[ "include",
]
sources =
[ "src/hello_world.c"
]
}
The script above compiles an executable file named hello_world that can run on HarmonyOS. To compile the component separately, use hb set to select any product, and then use the -T option to build only this component:
hb build -f -T //applications/sample/hello_world
After the component's functions have been verified on the development board, the component can be configured into a product as follows. Add the component description. Component descriptions are located under build/lite/components; a new component must be added to the json file of the subsystem it belongs to. Taking adding the hello_world component to the applications subsystem as an example, add a hello_world object in applications.json:
{
"components": [
{
"component": "hello_world",
"description": "Hello world.",
"optional": "true",
"dirs":
[ "applications/sample/hello_world"
], "targets":
[ "//applications/sample/hello_world"
]
},
...
]
}
Configure the component into a product. The product configuration file config.json is located under vendor/company/product/. It must contain the product name, HarmonyOS version number, device vendor, development board, kernel type, kernel version number, and the configured subsystems and components. Taking adding the hello_world component to the product configuration file my_product.json as an example, add a hello_world object:
{
"product_name": "hello_world_test",
"ohos_version": "OpenHarmony 1.0",
"device_company": "hisilicon",
"board": "hispark_taurus",
"kernel_type": "liteos_a",
"kernel_version": "1.0.0" ,
"subsystems": [
{
"subsystem": "applications",
"components":
[ { "component": "hello_world", "features":[] }
]
},
...
]
}
Compile the product. 1. Enter hb set in the code root directory to select the corresponding product. 2. Execute hb build.
Create the chip solution directory. Create a directory according to the chip solution configuration rules. Taking chip vendor realtek's development board "rtl8720" as an example, execute in the code root directory:
mkdir -p device/realtek/rtl8720
Create the kernel adaptation directory and write the development board compilation configuration config.gni. Taking the liteos_m adaptation of realtek's "rtl8720" development board as an example, the content of device/realtek/rtl8720/liteos_m/config.gni is as follows:
# Kernel version.
kernel_version = "3.0.0"
# The toolchain path installed; not mandatory if you have added the toolchain path to your ~/.bashrc.
board_toolchain_path =
rebase_path("//prebuilts/gcc/linux-x86/arm/gcc-arm-none-eabi/bin",
root_build_dir)
# Compiler prefix.
board_toolchain_prefix = "gcc-arm-none-eabi-"
Write a compilation script. Create BUILD.gn in the development board directory; the target name should be consistent with the development board name. Taking realtek's rtl8720 development board as an example, the content of device/realtek/rtl8720/BUILD.gn can be written accordingly.
Compile chip solution. Execute hb build in the development board directory to start compilation of the chip solution.
Create the product directory. Create a product directory according to the product solution configuration rules. Taking a wifiiot module based on the "rtl8720" development board as an example, execute in the code root directory:
mkdir -p vendor/my_company/wifiiot
Assemble the product. Create a new config.json file in the newly created product directory. Taking the wifiiot module in step 1 as an example, vendor/my_company/wifiiot/config.json can be written as:

{
  ...
  "subsystems": [
    {
"components": [
{ "component": "liteos_m", "features":[] } # Selected components and component features
] },
...
{
More subsystems and components
}
]
}
Note: before compilation, the build system checks the validity of the device_company, board, kernel_type, kernel_version, subsystem, and component fields. device_company, board, kernel_type, and kernel_version must match a known chip solution, and the subsystem and component must match the component descriptions under build/lite/components.
Adapt the OS interfaces. Create the hals directory under the product directory, and put the product solution's OS adaptation source code and compilation scripts in it.
Configure system services. Create the init_configs directory under the product directory and create the init.cfg file in it, then configure the system services to be started.
Configure the init process (required only for the linux kernel). Create the etc directory under the init_configs directory, and then create the init.d folder and the fstab file under etc. Finally, create and edit the rcS file and Sxxx files under the init.d directory according to product requirements.
Configure the file system image (optional; only required for development boards that support a file system). Create the fs.yml file in the product directory and configure it according to the actual product. A typical fs.yml file is as follows:

-
  fs_dir_name: rootfs
  fs_dirs:
    -
      # copy files in the compiled out/my_board/my_product/bin directory to rootfs/bin, ignoring test bins
      source_dir: bin
      target_dir: bin
      ignore_files:
        - Test.bin
        - TestSuite.bin
    -
      # copy files in the compiled out/my_board/my_product/libs directory to rootfs/lib,
      # ignoring all .a files, and set file and directory permissions to 644 and 755
      source_dir: libs
      target_dir: lib
      ignore_files:
        - .a
      dir_mode: 755
      file_mode: 644
    -
      source_dir: usr/lib
      target_dir: usr/lib
      ignore_files:
        - .a
      dir_mode: 755
      file_mode: 644
    -
      source_dir: config
      target_dir: etc
    -
      source_dir: system
      target_dir: system
    -
      source_dir: sbin
      target_dir: sbin
    -
      source_dir: usr/bin
      target_dir: usr/bin
    -
      source_dir: usr/sbin
      target_dir: usr/sbin
    -
      target_dir: mnt
    -
      target_dir: opt
    -
      target_dir: tmp
    -
      target_dir: var
    -
      target_dir: sys
    -
      source_dir: etc
      target_dir: etc
    -
      source_dir: vendor
      target_dir: vendor
    -
      target_dir: storage
  fs_filemode:
    -
      file_dir: lib/ld-uClibc-0.9.33.2.so
      file_mode: 555
    -
      file_dir: lib/ld-2.24.so
      file_mode: 555
    -
      file_dir: etc/init.cfg
      file_mode: 400
  fs_symlink:
    -
      link_name: ${fs_dir}/lib/ld-musl-arm.so.1
    -
      source: mksh
      link_name: ${fs_dir}/bin/sh
    -
      source: mksh
      link_name: ${fs_dir}/bin/shell
  fs_make_cmd:
    # use the script to make rootfs into an ext4 image
    - ${root_path}/build/lite/make_rootfs/rootfsimg_linux.sh ${fs_dir} ext4
-
  fs_dir_name: userfs
  fs_dirs:
    -
      source_dir: storage/etc
      target_dir: etc
    -
      source_dir: data
      target_dir: data
  fs_make_cmd:
    - ${root_path}/build/lite/make_rootfs/rootfsimg_linux.sh ${fs_dir} ext4
Write the compilation script. Create a BUILD.gn file in the product directory and write the script according to the actual product. Taking the hals directory created in step 1 as an example, add it to the deps of the product group:

group("my_product") {
  deps = []
  deps += [ "hals" ]  # add the OS-adaptation code under hals to the build
  # other modules
  ...
}
Compile the product. Execute hb set in the code root directory, select the new product as prompted, and then execute hb build to start compilation.
The distributed task scheduling module establishes a distributed service platform on the HarmonyOS operating system through a master-slave device service agent mechanism, allowing a master device (a smart screen device running HarmonyOS) to start FAs on slave devices (small-memory HarmonyOS devices such as IP cameras and sports watches). Take the smart screen's program start reminder as an example: in the favorite-programs menu on the smart screen, the user taps the "Remind me when it starts" button. When the program starts, the smart screen pulls up the program start reminder FA on the sports watch, through which users can quickly learn that their favorite program has started.
There are two basic concepts involved in distributed remote startup. An FA (Feature Ability) is an Ability with a user interface that is used to interact with users. Remote startup means starting an FA on a remote device.
The specific process of distributed remote startup is shown in the figure below.
The details of the code to implement distributed remote startup are as follows:
want.setFlags(Want.FLAG_ABILITYSLICE_MULTI_DEVICE); // set the distributed flag; without it, distributed capabilities cannot be used
startAbility(want); // start the specified FA according to the Want; Want parameter naming follows the actual development platform API
Distributed remote startup has the following restrictions: the master device can remotely start an FA on the slave device, but the slave device cannot remotely start an FA on the master device; before remote startup, the distributed networking between the HarmonyOS devices must be successful (the devices must be on the same network segment and able to ping each other), otherwise remote startup fails; currently, remote startup is only supported between master and slave devices that share common public key information (that is, the FAs of the master and slave devices are signed with the same Huawei certificate).
The smart screen side starts the slave-side FA through the following operations (the slave-side FA is assumed to be already developed). The first step is to open DevEco Studio and complete the FA development on the smart screen side. The second step is to obtain the device ID of the target online slave device.
// Import the device management class
import ohos.distributedschedule.interwork.DeviceManager;

List<DeviceInfo> deviceInfoListOnline = ...; // query the list of online devices via DeviceManager
if (deviceInfoListOnline.size() > 0) {
    remote_device_id = deviceInfoListOnline.get(0).getDeviceId(); // device ID of the first device in the online list
}
The third step is to construct the Want. First, use the ElementName class to specify the remote device ID, the package name, and the Ability class name to be started, and pass it into the Want; then set the distributed flag Want.FLAG_ABILITYSLICE_MULTI_DEVICE.

// Import the relevant classes
import ohos.bundle.ElementName;

Want want = new Want(); // encapsulate the Want that starts the remote FA
want.setFlags(Want.FLAG_ABILITYSLICE_MULTI_DEVICE); // set the distributed flag; without it, distributed capabilities are unavailable
startAbility(want); // start the specified FA according to the Want; Want parameter naming follows the actual development platform API
The public base library stores common basic components of HarmonyOS. These basic components can be used by HarmonyOS business subsystems as well as upper-layer applications. The public base library provides different capabilities on different platforms. LiteOS-M kernel (Hi3861 platform): KV storage, file operations, IoT peripheral control, and dumping system properties. LiteOS-A kernel (Hi3516 and Hi3518 platforms): JS APIs for KV storage, timers, data and file storage, and dumping system properties.
Taking file operations as an example, common file operations include creating or opening files, writing files, closing files, and deleting files.
// seek
ret = UtilsFileSeek(fd, 5, SEEK_SET_FS);
printf("lseek ret = %d\n", ret);
// stat
int fileLen = 0;
ret = UtilsFileStat(fileName, &fileLen);
printf("file size = %d\n", fileLen);
// delete
ret = UtilsFileDelete(fileName);
printf("delete ret = %d\n", ret);
// set
char key[] = "rw.sys.version_100";
char value[] = "Hello kv operation implement!";
int ret = UtilsSetValue(key, value);
printf("UtilsSetValue set ret = %d\n", ret);
// get
char temp[128] = {0};
ret = UtilsGetValue(key, temp, 128);
printf("UtilsGetValue get ret = %d, temp = %s\n", ret, temp);
// delete
ret = UtilsDeleteValue(key);
printf("UtilsDeleteValue delete ret = %d\n", ret);
The first step is to develop a native KV storage application based on AbilityKit: write the user program against the interfaces provided by KV storage.
// set
char key[] = "rw.sys.version_100";
char value[] = "Hello kv operation implement!";
int ret = UtilsSetValue(key, value);
printf("UtilsSetValue set ret = %d\n", ret);
// get
char temp[128] = {0};
ret = UtilsGetValue(key, temp, 128);
printf("UtilsGetValue get ret = %d, temp = %s\n", ret, temp);
// delete
ret = UtilsDeleteValue(key);
printf("UtilsDeleteValue delete ret = %d\n", ret);
{
  "app": {
    "bundleName": "com.huawei.launcher",
    "vendor": "huawei",
    "version": {
      "code": 1,
      "name": "1.0"
    }
  },
  "deviceConfig": {
    "default": {
      "reqSdk": {
        "compatible": "zsdk 1.0.0",
        "target": "zsdk 1.0.1"
      },
      "keepAlive": false
    },
    "smartCamera": {
      "reqSdk": {
        "compatible": "zsdk 1.0.0",
        "target": "zsdk 1.0.1"
      },
      "keepAlive": false
    }
  },
  "module": {
    "package": "com.huawei.launcher",
    "name": ".MyHarmonyAbilityPackage",
    "deviceType": [
      "phone", "tv", "tablet", "pc", "car", "smartWatch", "sportsWatch", "smartCamera"
    ],
    "distro": {
      "deliveryWithInstall": true,
      "moduleName": "Launcher",
      "moduleType": "entry"
    },
    "abilities": [{
      "name": "MainAbility",
      "icon": "res/drawable/phone.png",
      "label": "test app 1",
      "launchType": "standard",
      "type": "page"
    },
    {
      "name": "SecondAbility",
      "icon": "res/drawable/phone.png",
      "label": "test app 2",
      "launchType": "standard",
      "type": "page"
    },
    {
      "name": "ServiceAbility",
      "icon": "res/drawable/phone.png",
      "label": "test app 2",
      "launchType": "standard",
      "type": "service"
    }]
  }
}
Generate the HAP package. Store the files according to the following directory structure, placing resource files under res/drawable. Pack the above files into a zip archive and change the suffix to .hap, for example Launcher.hap. The second step is to connect the board and send the command to install the KV storage native application to the board through the serial port.
Finally, send the command to run the KV storage native application to the board through the serial port.
AT+SYSPARA
The LiteOS-M platform dump system property output is shown in the figure below.
./bin/os_dump --help
Add parameter -l to os_dump to check which modules in the current system support obtaining attributes.
./bin/os_dump -l
./bin/os_dump syspara
The LiteOS-A platform dump system property output is shown in the figure below.
development board in turn. This way of working is inefficient and error-prone. A new solution is now available: if the development boards can be connected to the network, remote upgrade can be used to upgrade a batch of development boards to the same set of applications. This not only improves efficiency but also keeps the software consistent across devices.
OTA (Over the Air) provides the ability to upgrade devices remotely, allowing various devices (such as IP cameras) to easily support remote upgrade. Currently, HarmonyOS only supports full-package upgrades and does not yet support differential-package upgrades. A full-package upgrade packages the entire contents of the new system into an upgrade package; a differential-package upgrade packages only the differences between the new and old systems.
The specific steps of OTA upgrade are shown in the figure below.
Constraints and limitations of OTA upgrade: it supports open-source kits based on the Hi3861/Hi3518EV300/Hi3516DV300 chips. For the Hi3518EV300/Hi3516DV300 open-source kits, the device needs to support an SD card (VFAT format).
Preparation: on a Windows PC, download and install the OpenSSL tool and configure the environment variables. Download the upgrade-package creation tool from the tools\update_tools\update_pkg_tools directory of the source code and save it locally (referred to below as ota_tools).
As shown in the figure below, run Generate_public_private_key.bat under ota_tools\key to generate the public key Metis_PUBLIC.key, the private key private.key, and the file public_arr.txt containing the array form of the public key. Keep the private key private.key safe.
0x30,0x82,0x1,0xa,0x2,0x82,0x1,0x1,0x0,0xc7,0x8c,0xf3,0x91,0xa1,0x98,0xbf,0xb1,0x8c,
0xbe,0x22,0xde,0x32,0xb2,0xfa,0xec,0x2c,0x69,0xf6,0x8f,0x43,0xa7,0xb7,0x6f,0x1e,0x4a,0x97,
0x4b,0x27,0x5d,0x56,0x33,0x9a,0x73,0x4e,0x7c,0xf8,0xfd,0x1a,0xf0,0xe4,0x50,0xda,0x2b,0x8,
0x74,0xe6,0x28,0xcc,0xc8,0x22,0x1,0xa8,0x14,0x9,0x46,0x46,0x6a,0x10,0xcd,0x39,0xd,0xf3,
0x4a,0x7f,0x1,0x63,0x21,0x33,0x74,0xc6,0x4a,0xeb,0x68,0x40,0x55,0x3,0x80,0x1d,0xd9,0xbc,
0xd4,0xb0,0x4a,0x84,0xb7,0xac,0x43,0x1d,0x76,0x3a,0x61,0x40,0x23,0x3,0x88,0xcc,0x80,0xe,
0x75,0x10,0xe4,0xad,0xac,0xb6,0x4c,0x90,0x8,0x17,0x26,0x21,0xff,0xbe,0x1,0x82,0x16,0x76,
0x9a,0x1c,0xee,0x8e,0xd9,0xb0,0xea,0xd5,0x50,0x61,0xcc,0x9c,0x2e,0x78,0x15,0x2d,0x1f,0x8b,
0x94,0x77,0x30,0x39,0x70,0xcf,0x16,0x22,0x8,0x99,0x7c,0xe2,0x55,0x37,0x76,0x9e,0x4b,0xfe,
...
0x4,0x62,0xfe,0x2a,0x5f,0xbf,0xeb,0x9a,0x73,0xa8,0x2a,0x72,0xe3,0xf0,0x57,0x56,0x5c,0x59,
0x14,0xdd,0x79,0x11,0x42,0x3a,0x48,0xf7,0xe8,0x80,0xb1,0xaf,0x1c,0x40,0xa2,0xc6,0xec,0xf5,
0x67,0xc1,0x88,0xf6,0x26,0x5c,0xd3,0x11,0x5,0x11,0xed,0xb1,0x45,0x2,0x3,0x1,0x0,0x1,
#define PUBKEY_LENGTH 270
For the Hi3518EV300/Hi3516DV300 kits, on the basis of the previous step, the public_arr.txt file generated above is also needed.
In the ota_tools\Components directory, place the files that need to be upgraded. The figure below shows where the original images are placed.
The file names in the package and their descriptions are as follows.
u-boot.bin is the compiled u-boot-hi351XevX00.bin file, renamed. kernel.bin is the compiled liteos.bin/kernel file, renamed. rootfs.img is the rootfs_xxxxx.img file generated by compilation, renamed. config is related to the development board type and kernel type.
OTA.tag has a total of 32 bytes, with the content "package_type:otaA1S2D3F4G5H6J7K8"; the last 16 bytes are random numbers and need to be changed each time an upgrade package is made.
Modify the packet_harmony.xml file under ota_tools\xml and set compAddr to the partition name corresponding to each file under ota_tools\Components. The other items are reserved extension fields and do not need to be modified. Example component configuration:

<group name="own">
  <list>
    <component compAddr="rootfs" compId="0x0017" resType="0x05" isDelete="0x00"
      compType="0x00" compVer="1.0">.\Components\rootfs_jffs2.img</component>
    <component compAddr="kernel_A" compId="0x0018" resType="0x05" isDelete="0x00"
      compType="0x00" compVer="1.1">.\Components\liteos.bin</component>
    <component compAddr="data" compId="0x0019" resType="0x05" isDelete="0x00"
      compType="0x00" compVer="1.2">.\Components\userfs_jffs2.img</component>
  </list>
</group>
Configure the paths of the generated public and private keys in packet_harmony.xml under the ota_tools\xml path. Example key configuration:
<encryption>
<privateKey type="der">.\key\private.key</privateKey>
<publicKey type="der">.\key\Metis_PUBLIC.key</publicKey>
</encryption>
Set the product name and software version number in ota_tools\VersionDefine.bat (used for anti-rollback verification). Example, configure the product name:
set FILE_PRODUCT_NAME=Hisi
Execute Make_Harmony_PKG.bat under ota_tools to generate the upgrade package Hisi_OpenHarmony 1.1.bin. The upgrade package is signed with SHA256+RSA2048 to ensure its integrity and legitimacy. The figure below shows the upgrade-package creation tool.
Upload the upgrade package Hisi_OpenHarmony 1.1.bin to the manufacturer's OTA server.
The manufacturer application downloads Hisi_OpenHarmony 1.1.bin from the OTA server.
For Hi3518EV300/Hi3516DV300 open source kit, an SD card (capacity >100MBytes) needs to be inserted.
Call the dynamic library libhota.so of the OTA module; the corresponding header files are located at base\update\ota_lite\interfaces\kits\hota_partition.h and hota_updater.h.
To adapt other development boards, refer to the HAL-layer header file: base\update\ota_lite\hals\hal_hota_board.h.
The upgrade package is produced according to the "Generating Public and Private Key Pairs" and "Generating Upgrade Packages" steps above. After downloading the upgrade package for the current device, the application side calls the HotaInit interface to initialize the OTA module, then calls the HotaWrite interface to pass in the upgrade-package data stream; this interface verifies, parses, and writes the data stream. After writing is completed, call the HotaRestart interface to restart the system. During the upgrade, the HotaCancel interface can be used to cancel. The sample code below uses HarmonyOS's default upgrade-package flow.
if (HotaInit(NULL, NULL) != 0) {
    printf("ota update init fail!\r\n");
    return -1;
}
int fd = open(OTA_PKG_FILE, O_RDWR, S_IRUSR | S_IWUSR);
if (fd < 0) {
    printf("file open failed, fd = %d\r\n", fd);
    (void)HotaCancel();
    return -1;
}
int offset = 0;
int fileLen = lseek(fd, 0, SEEK_END);
int leftLen = fileLen;
while (leftLen > 0) {
    if (lseek(fd, offset, SEEK_SET) < 0) {
        close(fd);
        printf("lseek fail!\r\n");
        (void)HotaCancel();
        return -1;
    }
    int tmpLen = leftLen >= READ_BUF_LEN ? READ_BUF_LEN : leftLen;
    (void)memset_s(g_readBuf, READ_BUF_LEN, 0, READ_BUF_LEN);
    /* read a chunk from the package file and write it to the OTA module */
    if (read(fd, g_readBuf, tmpLen) < 0 ||
        HotaWrite((unsigned char *)g_readBuf, offset, tmpLen) != 0) {
        close(fd);
        printf("ota write fail!\r\n");
        (void)HotaCancel();
        return -1;
    }
    offset += READ_BUF_LEN;
    leftLen -= tmpLen;
}
close(fd);
printf("ota write finish!\r\n");
printf("device will reboot in 10s...\r\n");
sleep(10);
(void)HotaRestart();
return 0;
}
The upgrade package is not produced according to the "Generating Public and Private Key Pairs" and "Generating Upgrade Packages" steps above, but through other means. After the application side downloads and obtains the upgrade package for the current device, it calls the HotaInit interface to initialize. It uses the HotaSetPackageType interface to set NOT_USE_DEFAULT_PKG, selecting the "customized" process. It calls the HotaWrite interface to pass in the upgrade-package data stream and write it to the device. After writing is completed, it calls the HotaRead interface to read the data back so that the manufacturer can verify the upgrade package itself. It calls HotaSetBootSettings to set the boot flag, used when uboot mode must be entered after restart (optional), and then calls the HotaRestart interface to restart. During the upgrade, the HotaCancel interface can be used to cancel. The sample code below uses the non-default, customized package flow.
if (HotaInit(NULL, NULL) != 0) {
    printf("ota update init fail!\r\n");
    return -1;
}
(void)HotaSetPackageType(NOT_USE_DEFAULT_PKG);
int fd = open(OTA_PKG_FILE, O_RDWR, S_IRUSR | S_IWUSR);
if (fd < 0) {
    printf("file open failed, fd = %d\r\n", fd);
    (void)HotaCancel();
    return -1;
}
int offset = 0;
int fileLen = lseek(fd, 0, SEEK_END);
int leftLen = fileLen;
while (leftLen > 0) {
    if (lseek(fd, offset, SEEK_SET) < 0) {
        close(fd);
        printf("lseek fail!\r\n");
        (void)HotaCancel();
        return -1;
    }
    int tmpLen = leftLen >= READ_BUF_LEN ? READ_BUF_LEN : leftLen;
    (void)memset_s(g_readBuf, READ_BUF_LEN, 0, READ_BUF_LEN);
    /* read a chunk from the package file and write it to the device */
    if (read(fd, g_readBuf, tmpLen) < 0 ||
        HotaWrite((unsigned char *)g_readBuf, offset, tmpLen) != 0) {
        close(fd);
        printf("ota write fail!\r\n");
        (void)HotaCancel();
        return -1;
    }
    offset += READ_BUF_LEN;
    leftLen -= tmpLen;
}
close(fd);
printf("ota write finish!\r\n");
/* read the written data back so the manufacturer can verify the package itself */
offset = 0;
leftLen = fileLen;
while (leftLen > 0) {
    int tmpLen = leftLen >= READ_BUF_LEN ? READ_BUF_LEN : leftLen;
    (void)memset_s(g_readBuf, READ_BUF_LEN, 0, READ_BUF_LEN);
    if (HotaRead(offset, READ_BUF_LEN, (unsigned char *)g_readBuf) != 0) {
        printf("ota read fail!\r\n");
        (void)HotaCancel();
        return -1;
    }
    /* manufacturer-specific verification of g_readBuf goes here */
    offset += READ_BUF_LEN;
    leftLen -= tmpLen;
}
The manufacturer's application calls the APIs of the OTA module; the OTA module performs signature verification of the upgrade package, version anti-rollback checking, and writing of the package to storage. After the upgrade is completed, the system restarts automatically.
For the Hi3518EV300/Hi3516DV300 open-source kits, in a version that needs the anti-rollback function, the value of LOCAL_VERSION must be increased, for example "ohos default 1.0" -> "ohos default 1.1". LOCAL_VERSION is defined as follows:

#if defined(CONFIG_TARGET_HI3516EV200) || \
    defined(CONFIG_TARGET_HI3516DV300) || \
    defined(CONFIG_TARGET_HI3518EV300)
#define LOCAL_VERSION "ohos default 1.0" /* increase: default release version */
The startup recovery subsystem is responsible for starting key system service processes in the interval from kernel startup to application startup, and for restoring the device to factory settings. It involves the following components: the init startup boot component, the appspawn application incubation component, the bootstrap service startup component, and the syspara system property component.
The process corresponding to the init startup boot component is the init process, the first user-mode process started after the kernel completes initialization. After the init process starts, it reads the init.cfg configuration file, executes the corresponding commands according to the parsing results (see the description in Chapter 2, Table 2), and starts each key system service process in sequence, setting each one's permissions as it is started.
The appspawn application incubation component is responsible for receiving commands from the user program framework to incubate application processes and for setting the permissions of new processes.
The bootstrap service startup component provides startup entry identifiers for each service and feature. When SAMGR starts, the entry functions identified by bootstrap are called to start the system services.
The syspara system property component provides interfaces for obtaining device information in accordance with the HarmonyOS product compatibility specifications, such as the product name, brand name, and manufacturer name, and also provides interfaces for setting and reading system properties.
The init startup boot component is responsible for starting key service processes during the system startup phase. If a new system service needs to start automatically at boot, it can be added to the configuration file init.cfg. The process corresponding to the init startup boot component is the init process, the first user-mode process started after the kernel completes initialization. After the init process starts, it reads the init.cfg configuration file, executes the corresponding commands according to the parsing results, and starts each key system service process in sequence, setting each one's permissions as it is started. The operating mechanism of the init startup boot component is shown in the figure below.
The development guidelines for the init startup boot component are as follows.
Configure the jobs array. The init startup boot component divides system startup into three stages. pre-init stage: operations that must be performed before starting system services, such as mounting file systems, creating folders, and modifying permissions. init stage: the system service startup stage. post-init stage: operations that need to be performed after system services have started.
Each of the above stages is represented by a job, and a job corresponds to a command set. init completes system initialization by executing the commands in each job in sequence. Job execution order: "pre-init" first, then "init", and finally "post-init".
"jobs" :
[{ "name" : "pre-init",
"cmds" :
[ "mkdir /testdir",
"chmod 0700 /testdir",
"chown 99 99 /testdir",
"mkdir /testdir2",
Machine Translated by Google
"mount vfat /dev/mmcblk0p0 /testdir2 noexec nosuid" // mount command, the format is: mount
file system type source target flags data
]
}, {
"name" : "init",
"cmds" : [
"start service1",
"start service2"
]
}, {
"name" : "post-init",
"cmds" : []
}
],
The job names and their descriptions are as follows. pre-init is the first job executed; if the developer's process needs some operations performed first (such as creating a folder), put them in pre-init. init is the job executed in the middle, for example service startup. post-init is the last job executed; if the developer's process needs some processing after startup completes (such as mounting a device after driver initialization), such operations can be executed in this job. A single job supports at most 30 commands (currently only start/mkdir/chmod/chown/mount/loadcfg are supported). There can be only one space between the command name and the parameters that follow it (parameter length ≤ 128 bytes).
Configure the services array: the service collection (in array form) contains all system services that the init process needs to start.
"services" : [{
"name" : "service1",
"path" : ["/bin/process1", "param1", "param2"], "uid" :
1, "gid" :
1, "once" :
0,
"importance" : 1,
"caps" : [0, 1, 2, 5]
}, {
"name" : "service2",
"path" : "/bin/process2",
"uid" : 2,
"gid" : 2,
"once" : 1,
"importance" : 0,
"caps" : [ ]
}
]
The service fields and their descriptions are as follows. name: the service name of the current service; must be non-empty and at most 32 bytes long. path: the full path and parameters of the current service's executable, in array form; the first array element must be the executable path, the number of elements must be ≤ 20, and each element is a string of at most 64 bytes. uid: the uid of the current service process. gid: the gid of the current service process. once: whether the current service process is one-shot. The value 1 indicates a one-shot process: when the process exits, init does not restart it. The value 0 indicates a resident process: when the process exits, init receives the SIGCHLD signal and restarts it; note that if a resident process exits 5 times within 4 minutes, init stops restarting it at the fifth exit. importance: whether the current service process is a key system process. The value 0 indicates a non-critical system process: init does not reset the system when it exits. The value 1 indicates a key system process: when it exits, init resets and restarts the system. caps: the capabilities required by the current service; based on the capabilities supported by the security subsystem, configure the required capabilities according to the principle of least privilege (currently at most 100 values can be configured).
A development example follows: adding a system service named MySystemApp to the init boot configuration.
"jobs" : [{
"name" : "pre-init",
"cmds" : [
"mkdir /storage/MyDir", init" // The folder needs to be created before the MySystemApp service is started, so it is placed in "pre-
"chmod 0600 /storage/MyDir", // The MySystemApp service requires that only this user and group be added to this file.
"chown 10 10 /storage/MyDir"
]
}, {
"name" : "init",
"cmds" : [
]
}, {
"name" : "post-init",
"cmds" : [] // No other operations are required after the MySystemApp system service is started, so no configuration is required
"post-init"
}
],
"services" : [{
["/bin/MySystemAppExe", "param1", "param2", "param3"], the executable file path of // MySystemApp system server
the task is "/bin/MySystemAppExe", and its startup requires Pass in three parameters, namely "param1", "param2"
and "param3
"once" : 0, if // The non-disposable process of MySystemApp system service, that is, if the MySystemApp system service
it exits for any reason, the init process needs to restart it
"importance": 0, the service //The MySystemApp system service is not a key system process, that is, if the MySystemApp system
exits for any reason, the init process does not need to restart the board
"caps" : [] // The MySystemApp system service does not require any capability permissions (i.e. MySystemApp
System services do not involve capability-related operations)
}
]
}
After completing the configuration, compile the package and flash the board. After startup, use the task -a command (LiteOS-A version) or the ps command (Linux version) to check whether the MySystemApp service process has started. Use the kill command to kill the newly added MySystemApp process and observe whether it is restarted (it should be restarted here). Use the kill command to kill the MySystemApp process again and observe whether this causes the board to restart (it should not restart here).
limit, and calls the entry function of the application framework. When appspawn is started by init, it registers its service name with the IPC (Inter-Process Communication) framework, then waits to receive inter-process messages, starts application services based on the parsed messages, and grants them the corresponding permissions.
The service name registered by appspawn is "appspawn" and can be obtained from the macro defined in the corresponding header file. Under the restriction rules of the security subsystem, currently only the Ability Manager Service has permission to send inter-process messages to appspawn. The message received by appspawn is in JSON format, as shown below:
"{\"bundleName\":\"testvalid1\",\"identityID\":\"1234\",\"uID\":1000,\"gID\":1000,\"capability\":[0]}"
The field descriptions are as follows. bundleName: the name of the application service process to start; the length is ≥ 7 bytes and ≤ 127 bytes. identityID: an identifier generated by AMS for the new process, passed through transparently to the new process by appspawn; the length is ≥ 1 byte and ≤ 24 bytes. uID: the uid of the application service process to start; must be positive. gID: the gid of the application service process to start; must be positive. capability: the capability permissions required by the application service process to start; the number must be ≤ 10.
framework subsystem) starts, the entry functions identified by bootstrap are called and the corresponding system services are started. This realizes automatic initialization of services: a service's initialization function does not need to be called explicitly; it is declared with a macro and executed automatically at system startup. The principle of Bootstrap is to place the service startup functions in the predefined zInit code segment via macro declarations; when the system starts, the OHOS_SystemInit interface is called, which traverses the code segment and calls the functions in it.
To add the zInit section, you can refer to the existing link script of the Hi3861 platform. The file path is
vendor/hisi/hi3861/hi3861/build/link/link.ld.S. The main service auto-initialization macros of bootstrap are described below. SYS_SERVICE_INIT(func) identifies the initialization startup entry of a core system service. SYS_FEATURE_INIT(func) identifies the initialization startup entry of a core system feature. APP_SERVICE_INIT(func) identifies the initialization startup entry of an application-layer service. APP_FEATURE_INIT(func) identifies the initialization startup entry of an application-layer feature.
void SystemServiceInit(void) {
printf("Init System Service\n");
}
SYS_SERVICE_INIT(SystemServiceInit);
void SystemFeatureInit(void) {
printf("Init System Feature\n");
}
SYS_FEATURE_INIT(SystemFeatureInit);
void AppServiceInit(void) {
printf("Init App Service\n");
}
APP_SERVICE_INIT(AppServiceInit);
void AppFeatureInit(void) {
printf("Init App Feature\n");
}
APP_FEATURE_INIT(AppFeatureInit);
The main function of the syspara system property component is to provide interfaces for obtaining and setting operating-system-related system properties. Supported platforms on the LiteOS-M and LiteOS-A kernels include the Hi3861, Hi3516DV300, and Hi3518EV300 platforms. Supported system properties include default system properties, OEM system properties, and custom system properties. For the OEM part, only default values are provided; the specific values need to be adjusted by each vendor according to the product.
The syspara system property interfaces are described as follows.
int GetParameter(const char* key, const char* def, char* value, unsigned int len): gets a system parameter.
int SetParameter(const char* key, const char* value): sets/updates a system parameter.
char* GetProductType(void): returns the current device type.
char* GetManufacture(void): returns the current device manufacturer information.
char* GetBrand(void): returns the current device brand information.
char* GetMarketName(void): returns the current device marketing name.
char* GetProductSeries(void): returns the current device product series name.
char* GetProductModel(void): returns the current device certified model.
char* GetSoftwareModel(void): returns the current device internal software sub-model.
char* GetHardwareModel(void): returns the current device hardware version number.
char* GetHardwareProfile(void): returns the current device hardware profile.
char* GetSerial(void): returns the current device serial number (SN).
char* GetOsName(void): returns the operating system name.
char* GetDisplayVersion(void): returns the software version number visible to the device user.
char* GetBootloaderVersion(void): returns the current device bootloader version number.
char* GetSecurityPatchTag(void): returns the security patch tag.
char* GetAbiList(void): returns the list of instruction sets (ABIs) supported by the current device.
char* GetSdkApiLevel(void): returns the SDK API level matching the current system software.
char* GetFirstApiLevel(void): returns the SDK API level of the first version of the system software.
char* GetIncrementalVersion(void): returns the incremental version number.
char* GetVersionId(void): returns the version ID.
char* GetBuildType(void): returns the build type.
char* GetBuildUser(void): returns the build account user name.
char* GetBuildHost(void): returns the build host name.
char* GetBuildTime(void): returns the build time.
char* GetBuildRootHash(void): returns the build version hash.
// get sysparm
char* value1 = GetProductType();
printf("Product type =%s\n", value1);
free(value1);
char* value2 = GetManufacture();
printf("Manufacture =%s\n", value2);
free(value2);
char* value3 = GetBrand();
printf("GetBrand =%s\n", value3);
free(value3);
char* value4 = GetMarketName();
printf("MarketName =%s\n", value4);
free(value4);
char* value5 = GetProductSeries();
printf("ProductSeries =%s\n", value5);
free(value5);
char* value6 = GetProductModel();
printf("ProductModel =%s\n", value6);
free(value6);
char* value7 = GetSoftwareModel();
printf("SoftwareModel =%s\n", value7);
free(value7);
char* value8 = GetHardwareModel();
printf("HardwareModel =%s\n", value8);
free(value8);
char* value9 = GetHardwareProfile();
printf("HardwareProfile =%s\n", value9);
free(value9);
char* value10 = GetSerial();
printf("Serial =%s\n", value10);
free(value10);
System properties supported on large systems include device information such as the device type and product name, and system information such as the system version.
5.7Softbus _
For a traditional computer, the bus is an internal structure that connects the CPU, memory, and input/output devices; all components of the host share this communication path. Its structure is shown in the figure below.

In the traditional bus structure, these connections are real hardware wires. The soft bus borrows from the traditional bus: it is a structure that connects a large number of devices without requiring real physical wiring. The structure of the soft bus is shown in the figure below. The soft bus connects "1+8+N" independent devices: 1 mobile phone, 8 kinds of devices (head unit, speaker, headset, watch/band, tablet, large screen, PC, AR/VR), and N kinds of IoT devices. The distributed soft bus is characterized by self-discovery, self-organizing networking, high bandwidth, and low latency.

Take Bluetooth data transfer between a mobile phone and a computer as an example. The traditional way requires manually searching for the device on the phone or PC, pairing over Bluetooth after the device is found, and transferring data only after pairing succeeds. With the soft bus, self-discovery and self-organizing networking are completed without the user being aware of them: when users want to transfer data between their phone and PC, they no longer need to find, connect, and pair the devices; they can transfer data directly, which effectively reduces tedious operation steps.
The main functions of the HarmonyOS graphics subsystem are: providing basic UI components and container components, including button, image, label, list, animator, scroll view, swipe view, font, clock, chart, canvas, slider, layout, and so on; and providing the ability to take screenshots and to export the component tree. Internally, the module implements functions such as component rendering, animation, and input event distribution.

The UI components implement various controls, such as basic controls like buttons, text, and progress bars, as well as complex controls such as interface switching and image sequence frames, and implement grid layout and flexible layout (such as centering, left alignment, and right alignment).
The layout is a one-shot layout: each time the layout function runs, the positions of the controls are calculated. However, when the position of a control is changed by other means (such as dragging), the positions of the other associated controls do not change automatically, and the layout function needs to be called again.

The principle of animation is that, driven by the tick event, the Task Manager periodically calls the callback function to process attribute changes and then triggers a refresh to redraw the component, achieving the component animation effect. The animation function provides operations such as start/stop and pause/resume.
Input events include touch-screen input events and physical-key input events. Each time the GUI engine runs, the Input Manager reads the input of all registered hardware devices once and converts it into various events for use by the UI controls.

2D graphics rendering implements drawing operations for lines, rectangles, triangles, and arcs. Image rendering implements drawing APIs for various types of images, such as the RGB565, RGB888, ARGB8888, PNG, and JPG formats. For fonts, real-time drawing of vector fonts is supported.
The media subsystem is mainly divided into two parts: camera development and audio/video development.

The camera is one of the services provided by the HarmonyOS multimedia process; it provides the camera's video recording, preview, and photo-taking functions.
Before developing a camera application, developers should understand the following basic concepts. A video stream is a data stream formed from a series of picture data at a fixed time interval; each piece of picture data is called a frame, and such frames are video frames. FPS, the frame rate (Frames Per Second), represents the number of frames the video refreshes per second; the higher the frame rate, the smoother the video looks. Resolution describes the number of pixels in a picture: for example, a 1920*1080 (1080P) picture is 1920 pixels wide and 1080 pixels high.
As a system service, the multimedia service is launched by the Init process when the system starts, and initializes and allocates media hardware resources
(memory/display hardware/image sensor/codec, etc.). The initialization process parses the configuration file and determines the capabilities and resource limits of each
multimedia service. This is usually configured by the OEM manufacturer through the configuration file.
The camera service has the following configuration items when the multimedia process is initialized:
Memory pool: all media services rely on memory circulated through the memory pool in order to run;
The creation process of Camera is roughly as follows: CameraManager creates a Camera instance and binds it to the camera device on the server side; after creation succeeds, the developer is notified asynchronously. The recording/preview sequence of Camera is that the developer first creates the Camera through CameraKit and then uses the FrameConfig class to configure the attributes of the recording or preview frames.
HarmonyOS audio and video includes audio and video playback and recording. HarmonyOS audio and video playback module supports the development of audio and video
playback services, including audio and video file and audio and video stream playback, volume and playback progress control, etc. The HarmonyOS recording module supports the
development of audio and video recording services and provides functions related to audio and video recording, including setting the recording video screen size, audio and video
encoding bit rate, encoder type, video frame rate, audio sampling rate, and recording file output format. The specific functional modules of audio and video development are shown in the figure below.
Before developing audio and video, you need to understand these basic concepts in advance:
Streaming media technology: a technology that encodes continuous image and sound information, places it on a network server, and lets viewers watch and listen while downloading, without waiting for the entire multimedia file to finish downloading.
Code rate (bit rate): the number of data bits transmitted per unit time. The common unit is kbps, thousands of bits per second.
Sampling rate (Hz): the number of samples per second taken from a continuous signal to form a discrete signal. The higher the sampling rate, the more faithfully the original sound is reproduced.
The AI business subsystem is the subsystem of HarmonyOS that provides native distributed AI capabilities. The AI business subsystem provides a unified AI engine framework
to realize rapid plug-in integration of algorithm capabilities. The framework mainly includes plug-in management, module management and communication management modules to
complete the life cycle management and on-demand deployment of AI algorithm capabilities. Plug-in management mainly implements plug-in life cycle management and on-demand
deployment of plug-ins, and quickly integrates AI capability plug-ins; module management mainly implements task scheduling and manages client instances; communication management mainly implements cross-process communication between the client and the server, and data transmission between the engine and the plug-ins. In the future, a unified AI capability interface will be gradually defined to facilitate distributed invocation of AI capabilities. At the same time, a unified inference interface that adapts to different inference frameworks is provided. The AI engine framework is shown in the figure below.
The AI engine framework includes three main modules: client, server and common. The client provides server-side connection management
functions. The northbound SDK needs to encapsulate and call the public interface provided by the client in the algorithm external interface; the server
provides functions such as plug-in loading and task management. Plugin implements the plug-in interface provided by the server to complete
plug-in access; common provides platform-related operating methods, engine protocols, and related utility classes for the other modules to call. The structure is shown in the figure below.
The sensor service provides functions such as sensor list query, sensor control, and sensor subscription and unsubscription. The lightweight sensor service framework is shown in the figure below.
Sensor API: provides the basic sensor APIs, mainly including querying the sensor list and subscribing to/unsubscribing from sensor data.
Sensor Framework: It mainly implements sensor subscription management, creation and destruction of data channels, etc., and realizes communication with the sensor service
layer.
Sensor Service: Mainly implements HDF layer data reception, analysis, distribution, management of device sensors, and data processing.
Taking the sensor whose sensorTypeId is 0 as an example (other types of sensors are used in a similar way), the specific steps for using the sensor service are as follows.
First, import the required header files:

#include "sensor_agent.h"
#include "sensor_agent_type.h"

Second, create a sensor callback function of the RecordSensorCallback type:

void SensorDataCallbackImpl(SensorEvent *event)
{
    if (event == NULL) {
        return;
    }
    /* process the reported sensor data here */
}

Third, create a subscriber whose callback member points to the callback created above:

SensorUser sensorUser;
sensorUser.callback = SensorDataCallbackImpl; // the callback member points to the created callback method

Then subscribe to the sensor data; the data can be obtained in the implemented callback method. A complete example is as follows:
#include "sensor_agent.h"
#include "sensor_agent_type.h"
#include "stdio.h"
#include "unistd.h"

void SensorDataCallbackImpl(SensorEvent *event)
{
    if (event == NULL) {
        return;
    }
    /* process the reported sensor data here */
}

int SensorSample(void)
{
    SensorUser sensorUser;
    sensorUser.callback = SensorDataCallbackImpl;
    /* Get the sensor list of the device */
    SensorInfo *sensorInfo = (SensorInfo *)NULL;
    int32_t count = 0;
    int32_t ret = GetAllSensors(&sensorInfo, &count);
    if (ret != 0) {
        printf("GetAllSensors failed! ret: %d", ret);
        return ret;
    }
    /* Subscribe to the sensor data */
    ret = SubscribeSensor(0, &sensorUser);
    if (ret != 0) {
        printf("SubscribeSensor failed! ret: %d", ret);
        return ret;
    }
    /* Enable the sensor */
    ret = ActivateSensor(0, &sensorUser);
    if (ret != 0) {
        printf("ActivateSensor failed! ret: %d", ret);
        return ret;
    }
    sleep(10);
    /* Unsubscribe from the sensor data */
    ret = UnsubscribeSensor(0, &sensorUser);
    if (ret != 0) {
        printf("UnsubscribeSensor failed! ret: %d", ret);
        return ret;
    }
    /* Disable the sensor */
    ret = DeactivateSensor(0, &sensorUser);
    if (ret != 0) {
        printf("DeactivateSensor failed! ret: %d", ret);
        return ret;
    }
    return 0;
}
The user program framework is the development framework that HarmonyOS provides for developers to build HarmonyOS applications. It includes two parts: the Ability subsystem and the package management subsystem.
The Ability subsystem is a development framework for managing the running status of HarmonyOS applications. The framework structure is shown in the figure below.
In the framework diagram above, an Ability is the smallest unit the system schedules for an application, a component that can complete an independent function; an application can contain one or more Abilities. Abilities are divided into two types: Page abilities and Service abilities. A Page ability has a UI and provides the user with human-computer interaction; a Service ability has no UI and provides a mechanism for running background tasks. An AbilitySlice is the sum of a single page and its control logic; it is a component unique to Page abilities. A Page ability can contain multiple AbilitySlices, in which case the business capabilities these pages provide should be highly related. The relationship between Ability and AbilitySlice is shown in the figure below.
The package management subsystem is an installation package management framework provided by HarmonyOS for developers. The package management framework is shown in the figure below.
BundleKit is the external interface provided by the package management service; it includes the install/uninstall interfaces, the package information query interface, and the package status change listener interface. The package scanner parses locally preset or installed packages and extracts the information inside them for the management submodule to manage and persist. The package installation submodule installs, uninstalls, and upgrades packages. The package installation service is a separate process that communicates with the package management service through IPC; it is used to create and delete installation directories and data directories, and it has higher permissions. The package management submodule manages information related to installation packages and persists the package information.
Package security management submodule: signature checking, permission granting, and permission management.
The security capabilities that the HarmonyOS security subsystem currently provides to developers mainly include application signature verification, application permission management, IPC communication authentication, and trusted device group management.
Application signature verification module: to ensure the integrity of application content, the system controls the sources of applications through application signatures and profiles. In addition, for debugging applications, the signature verification interface can check whether the UDIDs in the application and of the device match, ensuring that a debugging application can only be installed on the specified devices.
Application permission management module: application permissions are a common way to manage an application's access to system resources and use of system capabilities. During the development phase, the application needs to declare in profile.json which permissions it may use at runtime. Static permissions only need to be registered during the installation phase, while dynamic permissions generally involve sensitive information, so the user is required to grant them dynamically.
IPC communication authentication module: system services expose interfaces to other processes through IPC. These interfaces need to be configured with corresponding access policies; when another process accesses one of these interfaces, the IPC communication authentication mechanism is triggered to verify whether the accessing process has permission. If it does not, access to the interface is denied.
Trusted device group management module: based on the group concept, provides the creation and query of trusted device groups, including Huawei account groups and peer-to-peer groups (established through methods such as QR code or tap-to-connect). Distributed applications can use this capability to perform trusted authentication between devices and then establish secure sessions for data transmission.
Before developing applications that rely on signature verification components, developers should understand the following basic concepts:
Samgr (full name System Ability Manager): the module that manages system abilities on HarmonyOS; see the system service framework subsystem for details. BMS (full name Bundle Manager Service): package management, mainly responsible for application installation, uninstallation, and data management on HarmonyOS. The HarmonyAppProvision description file, referred to as the profile, is described in JSON format. The leaf certificate is the certificate ultimately used to sign the entire package or profile; it sits at the end of the digital certificate chain. Applications to be released refer to HAP packages for which developers have applied for a release certificate and release description file from the application market and signed with them, but which have not yet been officially released through the application market. HarmonyOS self-signed applications are HAP packages that developers generate when compiling HarmonyOS system applications themselves, self-signed using the original application description files together with the public HarmonyOS public/private key pair and certificate.
The test framework mainly includes modules such as test case compilation, test case management, test case scheduling and distribution, test case execution, test result collection, test report generation, test case templates, and test environment management.
Before developing the test subsystem, developers should first understand the following concepts:
Test case compilation supports compiling the test case source code into binary files executable on the device under test. Test case scheduling and distribution supports distributing test cases to different devices under test through network-port or serial-port channels and assigning a specific test case executor to each case. The test case executor is responsible for execution logic such as test case preprocessing, case execution, and result recording. The test case template defines a unified format for test cases and for the GN files that configure test case compilation. The test platform kit provides public methods used while the test platform runs, such as mounting a test case directory of the file system to the device under test, pushing test cases to the device under test, or obtaining test results from it. Test report generation defines the developer's self-test report template and generates web test reports. Test environment management supports managing the device under test through USB, serial port, etc., including functions such as device discovery and device management.
DFX mainly includes DFR (Design for Reliability) and DFT (Design for Testability) features.
DFX provides HiLog streaming logs, which are suitable for lightweight system devices (reference memory ≥ 128KB) and small system devices (reference memory ≥ 1MB).
Before using DFX, you need to understand the following concepts. Logs are messages generated while the system runs; developers use them to understand the running process and state of the system or an application. Distributed tracing: in a distributed system, a single business operation often involves multiple software modules, with control passed through intra-process, inter-process, and inter-device communication interfaces. To help developers understand such complex processes and to delimit and trace issues, DFX provides a distributed tracing framework.
While running, if a thread enters an infinite loop or gets stuck in the kernel (Uninterruptible Sleep, Traced, Zombie, or other synchronous waiting states), it can no longer respond to normal business requests and cannot detect or recover from the fault by itself. Detecting and locating such faults requires a simple watchdog mechanism: insert detection points in processes prone to getting stuck, and perform fault recovery and log collection when a stuck fault occurs. Event instrumentation ("buried points") is a technique of adding code at key points of a program's processing flow to collect information at runtime, supporting analysis of how the product is used. A system event is an identifier generated by a certain state of the HarmonyOS system and is used to record and track the system's running state.
The XTS subsystem is the set of HarmonyOS compatibility test suites. It currently includes acts (application compatibility test suite) and will be expanded to include dcts (device compatibility test suite) in the future.
The XTS subsystem currently includes the acts and tools software packages. acts stores the acts-related test case source code and configuration files; its purpose is to help device manufacturers discover incompatibilities between their software and HarmonyOS as early as possible and to ensure that the software meets HarmonyOS compatibility requirements throughout development. tools stores the acts-related test case development framework.
The lightweight system targets devices with MCU-class processors such as Arm Cortex-M and 32-bit RISC-V, whose hardware resources are extremely limited; the minimum memory of supported devices is 128KiB. It can provide a variety of lightweight network protocols, a lightweight graphics framework, and rich read/write components for IoT buses. Products it can support include connectivity modules, sensor devices, and wearables.
The small system targets devices with application processors such as Arm Cortex-A; the minimum supported device memory is 1MiB. It can provide higher security capabilities, a standard graphics framework, and video encoding/decoding multimedia capabilities. Products it can support include IP cameras, electronic peepholes, and routers in the smart home field, and dashcams in the smart travel field.
6 Functional debugging
The Shell provided by HarmonyOS's LiteOS-A kernel supports common basic debugging functions, including system-, file-, network-, and dynamic-loading-related commands. The Shell also supports adding new commands, which can be customized as needed.
System-related commands: provide queries of system tasks, kernel semaphores, system software timers, CPU usage, current interrupts, and other related information.
Network-related commands: support querying the IPs of other devices connected to the development board, querying the local IP, testing network connectivity, and setting the AP and station modes of the development board, among other functions.
When using the Shell, pay attention to the following points. The Shell supports using the exec command to run executable files. The Shell supports English input by default; if Chinese characters in UTF-8 format are entered, they can only be deleted by pressing backspace three times. The Shell supports Tab-key completion of shell commands, file names, and directory names; if there are multiple matches, they are printed based on their common prefix. When there are too many matches (more than 24 lines to print), the Shell asks (Display all num possibilities? (y/n)); the user can enter y to print them all or n to cancel. When more than 24 lines are printed, a --More-- prompt appears: press Enter to continue printing, or press q to quit (Ctrl+c also quits).
The shell-side working directory is separate from the system working directory: the shell-side working directory is operated through commands such as cd and pwd on the shell side, while the system working directory is operated through interfaces such as chdir and getcwd; the two working directories have no connection with each other. Pay special attention when a file system command takes a relative path as input. Before using the network Shell commands, you need to call the tcpip_init function to complete network initialization and establish the telnet connection before they can take effect; the kernel does not call tcpip_init by default. In general, it is not recommended to use Shell commands to operate device files in the /dev directory, as this may cause unpredictable results.
The typical development process for adding a new Shell command is as follows.
Include the header files:

#include "shell.h"
#include "shcmd.h"

Register the command. Users can choose between the static registration mode and the dynamic registration mode (registration while the system is running). Static registration is generally used to register common system commands, while dynamic registration is generally used to register user commands.
Static registration is done through a macro. The prototype of the macro is:

SHELLCMD_ENTRY(l, cmdType, cmdKey, paraNum, cmdHook)

The parameters of SHELLCMD_ENTRY are as follows. l is the name of the statically registered global variable (note: it must not have the same name as any other symbol in the system). cmdType is the command type: CMD_TYPE_EX does not support standard command parameter input and strips the command keyword entered by the user; for example, if you enter ls /ramfs, only /ramfs is passed to the registered function, and the ls command keyword is not passed in. CMD_TYPE_STD supports standard command parameter input; all input characters are passed in after being parsed by the command. cmdKey is the command keyword, the name by which the function is accessed in the Shell. paraNum is the maximum number of input parameters of the called execution function; its default value is XARGS (0xFFFFFFFF). cmdHook is the address of the command execution function, that is, the function actually executed for the command.
Then add the corresponding option in build/mk/liteos_tables_ldflags.mk; for example, when registering the "ls" command above, you need to add -uls_shellcmd there.
Dynamic registration is completed through the registration function, whose prototype is:

UINT32 osCmdReg(CmdType cmdType, CHAR *cmdKey, UINT32 paraNum, CmdCallBackFunc cmdProc)

The osCmdReg parameters are as follows. cmdType is the command type: CMD_TYPE_EX does not support standard command parameter input and strips the command keyword entered by the user (for example, if you enter ls /ramfs, only /ramfs is passed to the registered function, and the ls command keyword is not passed in); CMD_TYPE_STD supports standard command parameter input, and all entered characters are passed in after being parsed by the command. cmdKey is the command keyword, the name by which the function is accessed in the Shell. paraNum is the maximum number of input parameters of the called execution function; this parameter is not supported yet and currently keeps the default value XARGS (0xFFFFFFFF). cmdProc is the address of the command execution function, that is, the function actually executed for the command. For example: osCmdReg(CMD_TYPE_EX, "ls", XARGS, (CMD_CBK_FUNC)osShellCmdLs).
Note that the command keyword must be unique: two different command items cannot share the same command keyword, otherwise only one of them will be executed. When the Shell executes a user command and there are multiple commands with the same keyword, only the one listed first in the "help" output is executed.
The function prototype of an added built-in command is, for example, UINT32 osShellCmdLs(UINT32 argc, CHAR **argv). In osShellCmdLs, argc is the number of parameters in the Shell command, and argv is an array of pointers, each element pointing to a string. Whether the command keyword is passed to the registered function depends on the command type selected. There are two ways to enter a Shell command: enter it directly in the serial port tool, or enter it through the telnet tool.
#include "shell.h"
#include "shcmd.h"

int cmd_test(void)
{
    printf("hello everybody!\n");
    return 0;
}

SHELLCMD_ENTRY(test_shellcmd, CMD_TYPE_EX, "test", XARGS, (CMD_CBK_FUNC)cmd_test);

Then add the parameter that links the new command item to the link options, for example by adding -utest_shellcmd in build/mk/liteos_tables_ldflags.mk.
Use the help command to view all commands currently registered in the system; you can see that the test command has been registered. (The following command set is for reference only; the actual compiled and running conditions prevail.)
OHOS #help
*******************shell commands:******************************
There are two main steps in the dynamic registration example: first add the new command item with the osCmdReg function, then call osCmdReg in the user application initialization function to register the command dynamically:

#include "shell.h"
#include "shcmd.h"

int cmd_test(void)
{
    printf("hello everybody!\n");
    return 0;
}

void app_init(void)
{
    ....
    osCmdReg(CMD_TYPE_EX, "test", XARGS, (CMD_CBK_FUNC)cmd_test);
    ....
}
Use the help command to view all registered commands in the current system. You can find that the test command has been registered.
OHOS #help
*******************shell commands:******************************
cpup, used to query the CPU usage. Command format: cpup [mode] [taskID].
OHOS #cpup 1 5
pid 5 CpuUsage in 1s: 0.1
date, used to query and set the system date and time.
exec, used to run an executable file, for example: exec helloworld
kill, sends a specific signal to the specified process. Command format: kill [signo / -signo] [pid].
log, used to modify & query log configuration. Command format: log level [levelNum].
memcheck, checks whether the dynamically allocated memory blocks are intact and whether any memory overrun has damaged the nodes.
OHOS #memcheck
system memcheck over, all passed!
chmod is used to modify file operation permissions. Command format: chmod [mode] [pathname]. cp is used to copy files. Command format: cp [SOURCEFILE] [DESTFILE].
ls is used to display directory contents. The command format is: ls [path]. If path is empty, it means the current directory.
The mkdir command is used to create a directory. The command format is: mkdir [directory].
The mount command is used to mount a device to the specified directory. Command format: mount [DEVICE] [PATH] [NAME], where NAME is the file system type.
The pwd command is used to display the current path. For example:
OHOS # pwd
/bin/vs
The rm command is used to delete files or folders. The command format is: rm [-r] [dirname / filename], -r is an optional parameter.
rmdir can only be used to delete directories, and only one empty directory can be deleted per operation.
The touch command is used to create a non-existent empty file in the specified directory. If an existing file is operated, the timestamp will not be updated. Command format:
touch [filename].
arp is used to query the ARP cache, which stores the mapping table between IP and MAC addresses.
dhclient is used to set and view the parameters of dhclient; dns is used to view and set the DNS server address of the board.
netstat is used to check the port information of this device.
7 HarmonyOS porting
In the era of the Internet of Things, devices suitable for different scenarios may be based on various chip architectures, and developers need a unified operating system that maintains a consistent application framework and distributed protocols at the system level across all devices.
HarmonyOS can run on processors and development boards of different architectures. To make an operating system run on a specific target device, the operating system authors usually cannot write the entire operating system in one pass for every device; instead, they keep the code related to specific hardware behind abstract interfaces. Only in this way can the portability of the operating system as a whole be guaranteed.
More and more devices, such as electronic scales, smart air conditioners, and smart speakers, have intelligent functions and can be connected to the local area network; the number of devices and connections has increased significantly compared with previous years. This means that more similar application functions will be used across various devices, so the important significance of porting is to reduce the cost of adaptation during development.
The following figure compares the startup processes of the Windows operating system and HarmonyOS. In terms of the startup flow, the two are roughly similar, which is also the basis of software portability.
The operating system porting process includes the following steps.
First, environment preparation: before porting the system, a series of preparations are needed, including downloading the source code and setting up a cross-compilation environment. Then comes BootLoader porting: after the environment is prepared, the first program that runs after power-on, the BootLoader, needs to be ported. Next, compile the kernel: after the BootLoader has been ported, the operating system kernel must be configured and compiled, with modifications to the source code where necessary. Finally, make the root file system: put programs, libraries, configuration files, user data, and drivers (the last two are optional) into the root file system.
Ordinary software development uses native (local) compilation: on the current PC, with an x86 CPU, the program is compiled directly, and the resulting program (or library file) runs in that same environment, that is, on the x86 CPU of the computer that compiled it. This is called native compilation: the program is built on the current platform and runs on that same platform.
Cross-compilation is the counterpart of native compilation. In cross-compilation, the program is compiled on one platform but runs on another: the build environment differs from the execution environment. The concept of cross-compilation is mainly associated with embedded development. The cross-compilation process is shown in the figure below.
The main reason for cross-compilation is that embedded systems have limited resources and cannot support native compilation: the target environment in which the cross-compiled program will run is too constrained to compile the program on the target itself. The tool chain used to cross-compile and generate executable programs or library files is called a cross tool chain. Internally, its work consists of two stages: compilation and linking.
The complete compilation process of the program is shown in the figure below:
The complete compilation flow starts from the .c source file: preprocessing produces the .i preprocessed file, compilation produces the .s assembly file, and assembly produces the .o object file; multiple .o files are then linked into the .elf executable. Optionally, the executable can be disassembled into a .dis file to inspect the details of the compiled result.
The process of turning a single .c/.cpp source file into an object file is compilation, but a project rarely contains only one file. When multiple files are involved, some files must be compiled before others; arranging this compilation order is the job of the build process. Make is a build tool belonging to the GNU project. When the make command is executed, it needs a Makefile that tells it how to compile and link the program: make automatically finds the Makefile and executes it, which saves the trouble of compiling every file by hand, step by step. Although Make and the Makefile simplify manual building, writing Makefiles is still cumbersome, which is why the CMake tool exists: CMake generates the Makefile, and the CMakeLists.txt file specifies how the Makefile is to be generated.
Essentially, a library is executable code in binary form. It can be linked directly into an executable program by the compiler at compile time, or loaded into memory on demand by the operating system at run time. A set of libraries forms a release package; how many libraries go into a package is entirely up to the library provider. In practice, every program depends on many basic underlying libraries, since nobody can start all of their code from scratch, which is why libraries matter so much. For example, the C standard library is a set of built-in functions, constants, and header files for the C language, such as <stdio.h>, <stdlib.h>, and <math.h>, and it serves as a reference for C programmers.
The overall idea is to modify the tool chain, cross-compile the third-party library into binaries for the HarmonyOS platform, and finally add the library to the project.
With the CMake method, cross-compilation is done by specifying the tool chain: modify and compile the library to generate HarmonyOS platform binaries; then set up the HarmonyOS environment and run the test cases to complete testing; finally, add the library to the project by copying the successfully cross-compiled library into the third_party directory of HarmonyOS. To avoid modifying the BUILD.gn file in the directory of the third-party library being ported, add one more directory level to hold the new gn-to-CMake adaptation file, and then build and compile.
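In CMake, "specifying the tool chain" usually means passing a toolchain file via -DCMAKE_TOOLCHAIN_FILE. The fragment below is only a sketch of such a file: the compiler names and the arm-linux-ohos prefix are assumptions for illustration, not the actual paths of a HarmonyOS SDK.

```cmake
# Hypothetical toolchain file (ohos-toolchain.cmake); the compiler
# names and prefix are illustrative and depend on your SDK install.
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)
set(CMAKE_C_COMPILER   arm-linux-ohos-clang)
set(CMAKE_CXX_COMPILER arm-linux-ohos-clang++)
# Search headers and libraries only in the target sysroot, not on the host.
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```

It would then be used as, for example, cmake -DCMAKE_TOOLCHAIN_FILE=ohos-toolchain.cmake followed by the normal build step.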
The library porting idea of the Makefile method is similar to that of CMake. The difference is that the CMake method sets the tool chain by modifying CMakeLists.txt, while the Makefile method sets the tool chain by editing the Makefile by hand. Apart from this, the adaptation files BUILD.gn and config.gni created when adding the library to the project are the same as in the CMake method. First set up the Makefile cross-compilation tool chain, then modify and compile the library to generate binaries for the HarmonyOS platform, and then test. The test steps for the yxml library are basically the same as those for the double-conversion library: set up the HarmonyOS environment and run the test cases to complete testing. Finally, add the library to the project: copy the successfully cross-compiled library into the HarmonyOS third_party directory; to avoid modifying the BUILD.gn file in the directory of the library being ported, add one more directory level to hold the new gn-to-CMake adaptation file, and then build and compile.
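For the Makefile method, "setting the tool chain" typically means overriding the compiler variables, either inside the Makefile or on the make command line. The variable values below are illustrative assumptions, not actual HarmonyOS toolchain names:

```makefile
# Illustrative only: typical variables overridden for cross-compilation;
# the arm-linux-ohos prefix and sysroot variable are assumed names.
CC     := arm-linux-ohos-clang
AR     := llvm-ar
CFLAGS += --sysroot=$(OHOS_SYSROOT)
```

An equivalent one-off override, without editing the Makefile, is to run make CC=arm-linux-ohos-clang, since command-line variable assignments take precedence over assignments in the Makefile.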
Abbreviations used in this chapter:
API: Application Programming Interface
SCHED_FIFO: first-in, first-out (FIFO) scheduling policy
UART: Universal Asynchronous Receiver/Transmitter
SDIO: Secure Digital Input and Output interface
AI: Artificial Intelligence
UDID: Unique Device Identifier, the unique identifier of a device
BIOS: Basic Input Output System
GNU: "GNU's Not Unix!", a free software project whose software is released under the General Public License