
Automakers are integrating “Video Edge” in the connected car to improve safety

Vijay Anand
2022-02-07
capgemini-engineering

The faster first responders arrive on the scene of a car accident, the better the chances of saving lives.

And if the car’s occupants can relay their exact location and the injuries sustained to first responders while those responders are en route, survival chances improve further. (See Figure 1.)

This is just one of the exciting developments in the connected car market as automakers integrate video into safety systems for both drivers and passengers. Two early applications are pedestrian detection and collision warning, which alert the driver to take evasive action and avoid an accident. If an accident does occur, video recorded on the in-car network is transmitted in real time to first responders and used for further detailed analysis. Video-based safety is a critical emerging differentiator for the connected car.

High-definition video cameras in the connected car – e.g., forward, side, reverse, and 360-degree view – are becoming mainstream technologies for safety-related services. For example, the 360-degree view combines images from two or more cameras around the car for blind-spot monitoring. By combining camera images, the driver can also access a split view on the main dashboard that shows the right, left, and central camera views alongside lane monitoring and other safety information.
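The split-view idea above amounts to compositing several same-height camera frames into one dashboard image. A minimal sketch, with frames modeled as plain lists of grayscale pixel rows (a real system would use hardware-accelerated image buffers):

```python
def compose_split_view(left, center, right, separator_cols=1):
    """Combine three same-height camera frames (lists of pixel rows)
    into one side-by-side split view with a dark divider column."""
    assert len(left) == len(center) == len(right), "frames must share a height"
    divider = [0] * separator_cols  # 0 = black divider pixel
    return [l + divider + c + divider + r
            for l, c, r in zip(left, center, right)]

# Dummy 2x3 grayscale frames standing in for live camera feeds.
left   = [[10, 10, 10], [10, 10, 10]]
center = [[50, 50, 50], [50, 50, 50]]
right  = [[90, 90, 90], [90, 90, 90]]
view = compose_split_view(left, center, right)
print(len(view), len(view[0]))  # 2 rows, 3 + 1 + 3 + 1 + 3 = 11 columns
```

The function names and frame representation here are illustrative only; the point is that a split view is pure composition, so it adds negligible latency on the dashboard SoC.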

The video-based safety features of the connected car identify potholes and road debris and help the driver avoid them. The car’s safety platform can alert nearby cars to road hazards and accidents. The connected car also has intelligent cameras designed specifically to alert the owner if a stranger enters the vehicle, or to trigger a “welcome” message for approved drivers and passengers. With data captured from internal and external cameras, critical information such as pedestrian crossings, stop signals, traffic lights, and no-parking signs can be presented in an augmented reality (AR) display panel, giving the driver more information.

Managing various safety applications and their algorithms requires an e-cockpit platform powered by SoCs from vendors such as Nvidia or Qualcomm (Snapdragon), together with a large display carrying a dedicated human-machine interface (HMI) dashboard for the driver and rear-seat screens for passengers. (See Figure 2.) The e-cockpit platform interfaces with the video distribution system to process the video data generated by multiple cameras, and it connects to the rear-seat passenger screens over Ethernet Audio Video Bridging (AVB).

Several automotive OEMs consider Ethernet AVB a candidate standard for the reliable delivery of media data on the in-car network. IEEE 802.1 AVB offers a standards-based approach to highly reliable, low-latency networked transmission in the connected car. In addition, IEEE 802.1 AVB supports time-sensitive networking requirements by forming a single network for driver-assistance applications, such as driver drowsiness detection and other safety-critical functions, within the car ecosystem.
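The way AVB bounds latency for a traffic class is the credit-based shaper defined in IEEE 802.1Qav: a queue earns credit at an idle slope while waiting, spends it at a send slope while transmitting, and may only start a frame when credit is non-negative. A toy discrete-time sketch of that rule (the tick granularity and parameter values are illustrative, not from the standard):

```python
def cbs_simulate(frame_bits, idle_slope, port_rate, ticks):
    """Toy model of the IEEE 802.1Qav credit-based shaper used by AVB.

    frame_bits: queued frame sizes in bits, in arrival order
    idle_slope: credit gained per tick while frames wait (bits/tick)
    port_rate : bits the port can transmit per tick
    Returns the tick at which each frame finishes transmitting.
    """
    send_slope = idle_slope - port_rate   # negative: credit drains while sending
    credit, queue, finished = 0.0, list(frame_bits), []
    in_flight = 0.0                       # bits remaining in the current frame
    for tick in range(ticks):
        if in_flight > 0:                 # mid-frame: keep sending, drain credit
            in_flight -= port_rate
            credit += send_slope
            if in_flight <= 0:
                finished.append(tick)
        elif queue:
            if credit >= 0:               # shaper permits starting a new frame
                in_flight = queue.pop(0) - port_rate
                credit += send_slope
                if in_flight <= 0:
                    finished.append(tick)
            else:                         # blocked: accrue credit while waiting
                credit += idle_slope
        else:                             # idle: credit recovers toward 0, capped
            credit = min(credit + idle_slope, 0)
    return finished

# Two 200-bit frames; the class is shaped to 25 bits/tick on a 100 bits/tick port.
print(cbs_simulate([200, 200], idle_slope=25, port_rate=100, ticks=12))  # [1, 9]
```

Note how the second frame is held back until tick 8 even though the port is idle: that enforced gap is what reserves bandwidth for other traffic classes and keeps worst-case latency predictable.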

Today, it is standard for many new car models to have three or four built-in cameras. Mid-range models typically provide one rear-view camera and up to four cameras to deliver 360-degree views around the car. Some high-end car models provide up to seven cameras, including night-vision feature support.

The video cameras used in the connected car are critical system components that enhance the safety of both drivers and passengers. They perform image recognition of objects such as pedestrians and animals in drive and park modes, and superimpose road information, such as speed limits, roadwork areas, and no-parking zones, on the driver’s head-up display via an AR panel. In addition, artificial intelligence (AI) supported cameras positioned inside the car deliver safety and security benefits. For example, AI-enabled cameras improve monitoring to protect against intruders stealing valuable assets, giving drivers and passengers peace of mind.

Connected car video technologies used for entertainment and safety applications generate an enormous amount of data and network traffic, with video communications services topping the list. Essential video services, such as safety-oriented applications, require communications with very low latency and very high reliability. To meet the low-latency requirement, which is crucial for quick decision-making, the data generated by the various internal and external cameras must be processed in the car. The data does not have to travel to a cloud platform for video and image processing; it can be pre-processed and filtered in the car’s e-cockpit by discarding unwanted video data. Processing the video data at the edge (in this case, the car’s e-cockpit platform) provides the fastest response time and the most efficient way to handle it. In addition, edge processing enables data trimming with better security management and a reduced cellular network burden.
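One simple form of the in-car filtering described above is change detection: keep a frame for further processing or upload only when it differs meaningfully from the last kept frame, discarding near-duplicate video before it ever touches the cellular link. A minimal sketch, with frames modeled as flat lists of pixel intensities and a hypothetical mean-absolute-difference threshold:

```python
def filter_frames(frames, threshold=10.0):
    """Edge-side pre-filter: return the indices of frames worth keeping.

    A frame is kept when its mean absolute pixel difference from the
    last kept frame reaches the threshold; near-duplicates are dropped.
    frames: iterable of equal-length lists of pixel intensities.
    """
    kept, last = [], None
    for i, frame in enumerate(frames):
        if last is None:                  # always keep the first frame
            kept.append(i)
            last = frame
            continue
        mad = sum(abs(a - b) for a, b in zip(frame, last)) / len(frame)
        if mad >= threshold:              # scene changed enough: keep it
            kept.append(i)
            last = frame
    return kept

static = [50] * 16                        # dummy 4x4 frame, nothing moving
moved  = [80] * 16                        # dummy frame after a scene change
print(filter_frames([static, static, moved, moved, static]))  # [0, 2, 4]
```

A production pipeline would use motion vectors or object detections rather than raw pixel differences, but the principle is the same: the trimming decision happens on the e-cockpit SoC, so only relevant frames consume uplink bandwidth.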

The Advanced Driver Assistance System (ADAS) framework covering car safety applications resides in the e-cockpit platform and interfaces with Ethernet AVB cameras. (See Figure 3.) The ADAS middleware framework contains sophisticated algorithms that handle the safety features and provide information quickly to the driver to avoid a major accident. Adding artificial intelligence and machine learning at the edge of the connected car network to process video is an essential element in the e-cockpit platform that enables the connected-car ecosystem to detect problems at an early stage.
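The ADAS middleware’s job of turning camera detections into timely driver alerts can be pictured as a priority dispatcher. A hedged sketch, with an invented `AdasMiddleware` class and hypothetical detection-to-alert rules standing in for the proprietary algorithms the article describes:

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Alert:
    priority: int                      # lower value = more urgent
    message: str = field(compare=False)

class AdasMiddleware:
    """Toy ADAS middleware loop: camera detections arrive, safety rules
    map them to prioritized alerts, and the most urgent alert is
    surfaced first on the driver's HMI."""

    RULES = {  # hypothetical detection kind -> (priority, alert text)
        "pedestrian": (0, "Brake: pedestrian ahead"),
        "lane_drift": (1, "Lane departure - steer back"),
        "pothole":    (2, "Pothole reported ahead"),
    }

    def __init__(self):
        self._queue = []

    def on_detection(self, kind):
        """Called by the vision pipeline for each recognized object."""
        if kind in self.RULES:
            prio, msg = self.RULES[kind]
            heapq.heappush(self._queue, Alert(prio, msg))

    def next_alert(self):
        """Pop the most urgent pending alert for the HMI, if any."""
        return heapq.heappop(self._queue).message if self._queue else None

adas = AdasMiddleware()
for detection in ["pothole", "pedestrian", "lane_drift"]:
    adas.on_detection(detection)
print(adas.next_alert())  # "Brake: pedestrian ahead" - highest urgency first
```

The priority queue captures the key design requirement from the paragraph above: when multiple hazards are detected at once, the safety-critical one reaches the driver first.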

The growth of the connected car market is fueled by customer demand for robust safety-related features, and video is one technology that contributes to improved safety. The tools include high-definition cameras and video-image analysis performed by the e-cockpit platform on data from video cameras, lidar, and radar. The next-generation e-cockpit platform makes decisions quickly because it processes data received in real time from the car’s sensors locally. In the coming months and years, the in-car network will improve the overall safety of both drivers and passengers by helping avoid accidents, reducing traffic congestion, and bringing faster assistance from first responders.

For an in-depth analysis of video-based safety features in the connected car, download the white paper “MAKING THE CASE FOR VIDEO EDGE IN THE CONNECTED CAR.”

Author

Vijay Anand

Senior Director, Technology, and Chief IoT Architect, Capgemini Engineering
Vijay plays a strategic leadership role in building connected IoT solutions in many market segments, including consumer and industrial IoT. He has over 25 years of experience and has published 19 research papers, including IEEE award-winning articles. He is currently pursuing a Ph.D. at the Crescent Institute of Science and Technology, India.