DUBLIN–(BUSINESS WIRE)–The “End-to-end Autonomous Driving (E2E AD) Research Report, 2024” report has been added to ResearchAndMarkets.com’s offering.
End-to-end Autonomous Driving Research: status quo of End-to-end (E2E) autonomous driving
Status quo of end-to-end solutions in China
An end-to-end autonomous driving system maps sensor inputs (camera images, LiDAR, etc.) directly to control command outputs (steering, acceleration/deceleration, etc.). The approach first appeared in the ALVINN project in 1988, which used cameras and a laser rangefinder as inputs and a simple neural network to generate steering commands as output.
In early 2024, Tesla rolled out FSD V12.3, demonstrating an impressive level of intelligent driving. The end-to-end autonomous driving approach has since garnered widespread attention from OEMs and autonomous driving solution companies in China.
Compared with conventional multi-module solutions, an end-to-end autonomous driving solution integrates perception, prediction and planning into a single model, simplifying the system structure. It can mimic a human driver by making driving decisions directly from visual inputs, cope more effectively with the long-tail scenarios that challenge modular solutions, and improve model training efficiency and performance.
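To make the single-model idea concrete, here is a minimal illustrative sketch assuming a PyTorch-style imitation-learning setup; the network, shapes and names are hypothetical and not taken from Tesla's or any other vendor's stack.

# Minimal, hypothetical sketch of the end-to-end idea: one network maps a
# raw camera frame directly to control commands, instead of passing through
# separate perception / prediction / planning modules.
import torch
import torch.nn as nn

class E2EDrivingPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        # Perception backbone: raw RGB frame -> pooled feature vector
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Planning head: features -> [steering, acceleration]
        self.head = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 2), nn.Tanh(),  # normalized control commands
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(image))

# Training imitates human drivers: minimize the gap between the network's
# output and recorded human control commands.
policy = E2EDrivingPolicy()
frame = torch.randn(1, 3, 224, 224)      # one camera frame (dummy data)
human_cmd = torch.tensor([[0.1, 0.3]])   # recorded [steering, acceleration]
loss = nn.functional.mse_loss(policy(frame), human_cmd)
loss.backward()

In such a setup the "long tail" is addressed by training data coverage rather than hand-written rules, which is why data collection and processing become central, as discussed below.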
Actual Test of FSD V12.3
Li Auto’s end-to-end solution
Li Auto believes that a complete end-to-end model should cover the whole process of perception, tracking, prediction, decision and planning, and that it is the optimal route to L3 autonomous driving. In 2023, Li Auto pushed AD Max 3.0, whose overall framework reflects the end-to-end concept but still falls short of a complete end-to-end solution. In 2024, Li Auto is expected to evolve the system into a complete end-to-end solution.
Li Auto’s autonomous driving framework consists of two systems.
In promoting the end-to-end solution, Li Auto plans to unify the planning/prediction model with the perception model, and to build an end-to-end Temporal Planner on the existing foundation so as to integrate parking with driving.
Data has become the key to the implementation of end-to-end solutions.
Implementing an end-to-end solution requires processes covering R&D team building, hardware facilities, data collection and processing, algorithm training and strategy customization, verification and evaluation, and promotion to mass production. Some of the pain points in these scenarios are shown in the table:
The integrated training of end-to-end autonomous driving solutions requires massive amounts of data, so one of the main difficulties lies in data collection and processing.
DeepRoute
As of March 2024, DeepRoute.ai’s end-to-end autonomous driving solution had won a designation from Great Wall Motor and entered into cooperation with NVIDIA, with adaptation to NVIDIA Thor expected in 2025. In DeepRoute.ai’s planning, the transition from the conventional solution to the end-to-end autonomous driving solution will go through sensor pre-fusion, HD map removal, and the integration of perception, decision and control.
GigaStudio
DriveDreamer, an autonomous driving model from GigaStudio, is capable of scenario generation, data generation, driving action prediction and more. Scenario/data generation proceeds in two steps:
End-to-end solutions accelerate the application of embodied robots.
In addition to autonomous vehicles, embodied robots are another mainstream application scenario for end-to-end solutions. Moving from end-to-end autonomous driving to robots requires building a more universal world model that can adapt to more complex and diverse real-world application scenarios. The development framework of mainstream AGI (Artificial General Intelligence) is divided into two stages:
In bringing the world model into practice, the construction of an end-to-end VLA (Vision-Language-Action) autonomous system has become a crucial link. As the foundation model of embodied AI, VLA can seamlessly link 3D perception, reasoning and action to form a generative world model; it is built on a 3D-based large language model (LLM) and introduces a set of interactive markers to interact with the environment.
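As a rough illustration of this pattern, and not the architecture of any specific VLA model, the sketch below assumes pre-extracted 3D perception features and a tokenized instruction sharing one small transformer, whose output is decoded into discretized action tokens; all module names and sizes are hypothetical.

# Hypothetical sketch of the VLA pattern: visual tokens and language tokens
# share one transformer trunk, and the output is decoded into action tokens.
import torch
import torch.nn as nn

EMBED, N_TEXT_TOKENS, N_ACTION_BINS = 256, 1000, 64

class TinyVLA(nn.Module):
    def __init__(self):
        super().__init__()
        self.vision_proj = nn.Linear(512, EMBED)           # project 3D perception features to token space
        self.text_embed = nn.Embedding(N_TEXT_TOKENS, EMBED)
        layer = nn.TransformerEncoderLayer(d_model=EMBED, nhead=4, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=2)
        self.action_head = nn.Linear(EMBED, N_ACTION_BINS)  # discretized action tokens

    def forward(self, vision_feats, instruction_ids):
        v = self.vision_proj(vision_feats)                  # (B, Nv, EMBED)
        t = self.text_embed(instruction_ids)                # (B, Nt, EMBED)
        h = self.trunk(torch.cat([v, t], dim=1))            # joint vision-language sequence
        return self.action_head(h[:, -1])                   # predict the next action token

vla = TinyVLA()
vision_feats = torch.randn(1, 8, 512)                       # e.g. 8 pooled 3D-perception features
instruction = torch.randint(0, N_TEXT_TOKENS, (1, 6))       # a tokenized language instruction
action_logits = vla(vision_feats, instruction)              # (1, N_ACTION_BINS)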
As of April 2024, some manufacturers of humanoid robots adopting end-to-end solutions are as follows:
For example, UdeerAI’s Large Physical Language Model (LPLM) is an end-to-end embodied AI solution. It uses a self-labeling mechanism to improve the efficiency and quality of learning from unlabeled data, thereby deepening the model’s understanding of the world and enhancing the robot’s generalization and environmental adaptability across modalities, scenes and industries.
LPLM abstracts the physical world and ensures that this information is aligned with the level of abstraction of features in the LLM. It explicitly models each entity in the physical world as a token, encoding its geometric, semantic, kinematic and intentional information.
In addition, LPLM adds 3D grounding to the encoding of natural language instructions, improving the accuracy of instruction understanding to some extent. Its decoder learns by constantly predicting the future, thus strengthening the model’s ability to learn from massive amounts of unlabeled data.
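The "entity as token" idea can be sketched as follows; the feature groups and their dimensions are assumptions chosen for illustration, not details of UdeerAI’s LPLM.

# Hypothetical illustration of modeling a physical entity as a single token:
# geometric, semantic, kinematic and intent features are fused into one
# embedding that a downstream transformer could consume.
import torch
import torch.nn as nn

class EntityTokenizer(nn.Module):
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.geometry = nn.Linear(6, embed_dim)        # 3D position + bounding-box size
        self.semantics = nn.Embedding(50, embed_dim)   # object class id
        self.kinematics = nn.Linear(3, embed_dim)      # velocity vector
        self.intent = nn.Embedding(8, embed_dim)       # coarse intention label
        self.fuse = nn.Linear(4 * embed_dim, embed_dim)

    def forward(self, geom, cls_id, vel, intent_id):
        parts = [self.geometry(geom), self.semantics(cls_id),
                 self.kinematics(vel), self.intent(intent_id)]
        return self.fuse(torch.cat(parts, dim=-1))     # one token per entity

tok = EntityTokenizer()
token = tok(torch.randn(1, 6), torch.tensor([3]),      # geometry + class of one entity
            torch.randn(1, 3), torch.tensor([1]))      # velocity + intention label
print(token.shape)  # torch.Size([1, 128])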
Key Topics Covered:
1 Foundation of End-to-end Autonomous Driving Technology
1.1 Terminology and Concept of End-to-end Autonomous Driving
1.2 Status Quo of End-to-end Autonomous Driving
1.3 Comparison among End-to-end E2E-AD Motion Planning Models
1.4 Comparison among End-to-end E2E-AD Models
1.5 Typical Cases of End-to-end Autonomous Driving E2E-AD Models
1.6 Embodied Language Models (ELMs)
2 Technology Roadmap and Development Trends of End-to-end Autonomous Driving
2.1 Scenario Difficulties
2.2 Development Trends
2.3 Universal World Model: Three Paradigms and System Construction of AGI
3 Application of End-to-end Autonomous Driving in the Field of Passenger Cars
3.1 Dynamics of Domestic End-to-end Autonomous Driving Companies
3.2 DeepRoute.ai
3.3 Haomo.AI
3.4 PhiGent Robotics
3.5 NIO
3.6 Xpeng
3.7 Li Auto
4 Application of End-to-end Autonomous Driving in the Field of Robots
4.1 Progress of End-to-end Technology for Humanoid Robots
4.2 Humanoid Robot
4.3 Zero Demonstration Autonomous Robot Open Source Model: O Model
4.4 Nvidia’s Project GR00T
5 How to Implement End-to-end Autonomous Driving Projects?
5.1 E2E-AD Project Implementation Case: Tesla
5.2 E2E-AD Project Implementation Case: Wayve
5.3 Team Building and Project Budget
5.4 Automotive E2E Autonomous Driving System Design
5.5 Cloud E2E Autonomous Driving System Design
For more information about this report visit https://www.researchandmarkets.com/r/yc9oyp
About ResearchAndMarkets.com
ResearchAndMarkets.com is the world’s leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.
Contacts
ResearchAndMarkets.com
Laura Wood, Senior Press Manager
press@researchandmarkets.com
For E.S.T Office Hours Call 1-917-300-0470
For U.S./ CAN Toll Free Call 1-800-526-8630
For GMT Office Hours Call +353-1-416-8900