reflections to see behind obstacles, and do not need a direct line of sight since radio waves are reflective. Another type of sensor used in autonomous vehicles are cameras. These are widely used by car manufacturers because they are very efficient at texture classification and interpretation, and they are broadly available and more affordable than RADAR or LiDAR sensors. They are used in high-end vehicles for collision avoidance, lane-keeping assistance and in-vehicle traffic sign recognition. They are also able to obtain a detailed representation of the scenery, including the exact position of buildings, vegetation, other road participants and further obstacles. In this way, the autonomous vehicle is able to self-localize and navigate through traffic. [Koc18] stated that "The latest high-definition cameras can produce millions of pixels per frame, with 30 to 60 frames per second, to develop intricate imaging. This leads to multi-megabytes of data needed to be processed in real-time". On the other hand, this means that high computation power is needed to process the camera data, which is a main disadvantage of using cameras in autonomous vehicles.

An example of a sensor fusion that could require a lot of processing power due to the use of many cameras is the "autopilot system", a sensor concept used by Tesla. It consists of six cameras mounted on the vehicle: a narrow forward camera with a maximum distance of 250 m, a main forward camera with a maximum distance of 150 m, a wide forward camera with a maximum distance of 60 m, a rearward looking side camera with a maximum distance of 100 m, a rear view camera with a maximum distance of 50 m and a forward looking side camera with a maximum distance of 80 m. Furthermore, it includes ultrasonic sensors with a maximum distance of 8 m and a RADAR with a maximum distance of 160 m [Tes20].

Another sensor concept used on autonomous vehicles was developed by APTIV. Their autonomous vehicle has a sensor concept that includes six electronically scanning RADARs (ESR), four short-range RADARs (SRR), four short-range LiDARs, five long-range LiDARs, one trifocal camera and one traffic light camera [Apt20]. This sensor concept is at the high-cost end due to the sensors used.

There is also the Waymo sensor concept, for which LiDARs, a vision system and RADARs were customised, with three LiDAR sensors located at the side, back and lower part of the vehicle. The customised vision system comprises eight vision modules, which enables 360-degree vision, and a customised RADAR is mounted on top of the vehicle [Tea17]. On the one hand, the customised sensors produce better detection results, but on the other hand they are still an expensive option.

In the above-mentioned sensor concepts, redundancy was used in the sense of more than one sensor fulfilling one purpose, and diversity in the sense of using different technologies, in order to avoid the unacceptable risk that could arise from using only one kind of sensor. According to the automotive functional safety standard, however, diversity means using "different solutions satisfying the same requirement, with the goal of achieving independence" [ISO26262], which means that inhomogeneous redundancy should be implemented so that the system obtains the same information from two or more sensors to achieve better and safer results. Using different technologies is also important in order to avoid common cause failures.
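The following sketch is only an illustration of what inhomogeneous redundancy means in practice; the sensor names, sector assignments and the simplified grouping into front, rear and side directions are assumptions loosely based on the ranges quoted above, not an exact model of any manufacturer's system.

```python
from dataclasses import dataclass

@dataclass
class Sensor:
    name: str
    technology: str   # e.g. "camera", "radar", "lidar", "ultrasonic"
    direction: str    # coarse sector: "front", "rear", "side"
    max_range_m: float

# Assumed, simplified subset of a Tesla-style concept (ranges from [Tes20]).
sensors = [
    Sensor("narrow forward camera", "camera", "front", 250),
    Sensor("main forward camera", "camera", "front", 150),
    Sensor("wide forward camera", "camera", "front", 60),
    Sensor("forward RADAR", "radar", "front", 160),
    Sensor("rear view camera", "camera", "rear", 50),
    Sensor("rearward looking side camera", "camera", "side", 100),
    Sensor("ultrasonic ring", "ultrasonic", "side", 8),
]

def diversity_report(sensors, directions=("front", "rear", "side")):
    """For each direction, collect the distinct sensor technologies covering it."""
    return {d: {s.technology for s in sensors if s.direction == d}
            for d in directions}

for direction, techs in diversity_report(sensors).items():
    status = "diverse redundancy" if len(techs) >= 2 else "single technology only"
    print(f"{direction}: {sorted(techs)} -> {status}")

# In this simplified model the front is observed by cameras and RADAR (diverse),
# while the rear relies on cameras alone, i.e. homogeneous redundancy at best.
```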
The aforementioned sensor concepts use sensor fusion to combine data from different sensors to produce better results, which comes with its own challenges, such as problems in path planning and obstacle avoidance [Koc18]. In the Tesla sensor concept, redundancy has only been adopted for some of the cameras, but not for the RADAR sensor, which plays a major role in detecting objects in bad weather conditions. In the case of the APTIV sensor set, different RADARs and LiDARs are used redundantly, but, as mentioned before, it is very expensive and there is still a safety problem with this sensor concept implementation, because a failure of any of the used sensors could still lead to an accident, due to the positioning of the sensors and a redundancy that does not cover all distances. Each of these sensor concepts has its weak spots, so there is a need for another sensor concept with different positioning that is implemented redundantly in a less expensive way. Another option would be to create a new sensor that surpasses the currently used ones.

Many researchers provide different solutions on how a sensor concept can be implemented. For the hardware part, [Hanky19] highlights the need for the sensors to be calibrated in order to achieve precise and accurate distance and speed detection. A method called surround sensors was proposed by [Jamiol17], whereby the vehicle is surrounded by diverse sensors that meet different requirements and work reliably even in unfavourable scenarios. With the front sensors, the vehicle should be able to detect the object in front of it early enough to avoid an uncomfortable braking manoeuvre. The side sensors should be able to detect, for instance, whether the lanes are free or whether there could be a danger in changing lanes. The rear-view sensor should be able to detect objects approaching from behind, but fast-moving objects with speeds of about 250 km/h will be a challenge for the rear-view sensor [Jamiol17]. A similar solution from [Jos17] stated that a single perception sensor is inadequate to ensure safety. Therefore, multiple sensors of different technologies, which would accomplish redundancy and diversity, are proposed: a single sensor alone cannot detect parallel scenes and scenarios happening on the street, which is why sensor combinations are required to reduce the shortcomings of an individual sensor.

Although solutions and sensor concept implementations already exist, they do not yet guarantee the safety of a fully autonomous vehicle in all circumstances. The possible inability of some sensors to detect vehicles or objects ahead early enough could lead to uncomfortable braking or, worse still, an accident. More challenges resulting from the use of sensors were highlighted by [Hais16]. He stated that while some sensors have problems detecting larger objects, others cannot detect smaller objects with different sizes and shapes. This happens frequently with RADAR sensors, which are used in every sensor concept mentioned above. [Hais16] also highlights the problem of autonomous vehicles not being able to detect the difference between an animal and a child on a brown bobby car [Hais16]. In such a situation, a LiDAR sensor could be used redundantly to compensate for this weakness. The amount of redundancy and diversity required according to [ISO26262] and [EN50126] to achieve a safe operation of autonomous vehicles needs to be determined.
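The requirement from [Jamiol17] that objects be detected "early enough" can be made concrete with elementary kinematics. The following is a minimal sketch, not taken from the cited works: the deceleration of 3 m/s² (a comfortable braking level) and the 0.5 s reaction time are illustrative assumptions, and the calculation treats the detected object as stationary relative to the closing speed.

```python
def required_detection_range(v_kmh: float, a_ms2: float = 3.0,
                             t_react_s: float = 0.5) -> float:
    """Distance needed to stop comfortably: reaction distance plus braking distance."""
    v = v_kmh / 3.6                     # convert km/h to m/s
    return v * t_react_s + v ** 2 / (2 * a_ms2)

for speed in (50, 130, 250):
    print(f"{speed:>3} km/h closing speed -> "
          f"~{required_detection_range(speed):.0f} m detection range needed")

# At a 250 km/h closing speed this yields well over 800 m, far beyond the
# 50-160 m rear-facing ranges quoted above, which illustrates why fast
# approaching traffic is a challenge for rear-view sensors.
```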
Moreover, the most suitable positioning of sensors on an autonomous vehicle needs to be determined in order to achieve a high degree of coverage and redundancy. Therefore, a new sensor concept is necessary. The following research questions are the basis for this master thesis:

1. According to which aspects can redundancies in autonomous driving (AD) be assessed?
2. How much redundancy and diversity is required for autonomous driving (AD)?
3. What does an autonomous vehicle need in order to fulfil the standards ISO 26262 and EN 50126?
4. What angular positions are necessary to compensate for redundancy gaps?
5. How stable are the chosen sensor combinations under different external influences (e.g. weather, lighting conditions, and vibrations)?
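As a purely hypothetical illustration of the kind of analysis behind research question 4, the following sketch checks which angular sectors around a vehicle are observed by fewer than two sensors; all mounting angles and fields of view are invented values, not taken from any of the concepts discussed above.

```python
def covers(center_deg: float, fov_deg: float, angle_deg: float) -> bool:
    """True if angle_deg lies within the sensor's horizontal field of view."""
    diff = (angle_deg - center_deg + 180) % 360 - 180   # signed angular difference
    return abs(diff) <= fov_deg / 2

# (mounting angle in degrees, horizontal field of view in degrees) -- illustrative only
sensor_fovs = [
    (0, 120),    # wide forward camera
    (0, 40),     # narrow forward camera or forward RADAR
    (90, 90),    # right side camera
    (270, 90),   # left side camera
    (180, 120),  # rear view camera
]

# Sample the horizon in 1-degree steps and flag directions without redundant coverage.
gaps = [a for a in range(360)
        if sum(covers(c, f, a) for c, f in sensor_fovs) < 2]
print(f"{len(gaps)} degrees of the horizon lack coverage by at least two sensors")
```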
