An autonomous vehicle, also known as a self-driving car, is a vehicle capable of sensing its environment and operating without human involvement. It uses a variety of sensors, such as cameras, lidar, and radar, together with AI and machine learning algorithms to perceive its surroundings, make decisions, and navigate roads safely.
Researchers present a groundbreaking framework that combines Bayesian learning with interval continuous-time Markov chain model checking to verify autonomous robots operating in challenging conditions. Demonstrated on an underwater vehicle mission, the technique provides robust estimates of mission success, safety, and energy consumption, offering a scalable solution for diverse autonomous systems in uncertain environments.
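For intuition, here is a minimal sketch of the Bayesian ingredient only: a conjugate Gamma-Poisson update that turns observed transition counts and sojourn times into a credible interval for one continuous-time Markov chain rate, the kind of interval an interval CTMC model checker would then bound-check. The prior, counts, and times below are illustrative assumptions, not values from the paper.

```python
from scipy import stats

# Conjugate Gamma-Poisson update for one CTMC transition rate:
# prior Gamma(alpha, beta); observe k transitions over total sojourn time t.
alpha_prior, beta_prior = 1.0, 1.0         # weakly informative prior (assumed)
k, t = 7, 120.0                            # e.g., 7 events observed in 120 hours of operation

alpha_post, beta_post = alpha_prior + k, beta_prior + t
posterior = stats.gamma(a=alpha_post, scale=1.0 / beta_post)

low, high = posterior.interval(0.95)       # 95% credible interval for the rate
print(f"rate in [{low:.4f}, {high:.4f}] per hour")
```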
Researchers from India, Australia, and Hungary introduce a robust model employing a cascade classifier and a vision transformer to detect potholes and traffic signs in challenging conditions on Indian roads. The algorithm, showcasing impressive accuracy and outperforming existing methods, holds promise for improving road safety, infrastructure maintenance, and integration with intelligent transport systems and autonomous vehicles.
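The two-stage idea, a fast cascade proposing candidate regions followed by a vision transformer classifying each crop, can be sketched as follows. The cascade file, image path, and three-class head are hypothetical placeholders; the actual weights would come from training on the authors' road data.

```python
import cv2
import torch
from torchvision import transforms
from torchvision.models import vit_b_16

# Stage 1: a trained cascade proposes candidate regions ("pothole_cascade.xml" is hypothetical).
cascade = cv2.CascadeClassifier("pothole_cascade.xml")
# Stage 2: a vision transformer classifies each candidate (weights would come from fine-tuning).
vit = vit_b_16(num_classes=3).eval()        # e.g., {pothole, traffic sign, background}
to_tensor = transforms.Compose([
    transforms.ToPILImage(), transforms.Resize((224, 224)), transforms.ToTensor()])

image = cv2.imread("road_scene.jpg")        # hypothetical input frame
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    crop = cv2.cvtColor(image[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        label = vit(to_tensor(crop).unsqueeze(0)).argmax(dim=1).item()
    print((x, y, w, h), label)
```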
Researchers present ReAInet, a novel vision model aligning with human brain activity based on non-invasive EEG recordings. The model, derived from the CORnet-S architecture, demonstrates higher similarity to human brain representations, improving adversarial robustness and capturing individual variability, thereby paving the way for more brain-like artificial intelligence systems in computer vision.
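One common way to quantify "similarity to human brain representations" is representational similarity analysis: build a dissimilarity matrix over images from model activations, another from EEG responses, and rank-correlate the two. The sketch below uses random stand-in data and generic feature sizes; it illustrates the comparison, not the paper's exact metric.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_images = 50

# Stand-in data: per-image model activations and EEG response patterns (same image order).
model_features = rng.normal(size=(n_images, 512))    # e.g., penultimate-layer activations
eeg_patterns = rng.normal(size=(n_images, 64))       # e.g., channel-by-time features, flattened

# Representational dissimilarity matrices (condensed upper triangles).
rdm_model = pdist(model_features, metric="correlation")
rdm_eeg = pdist(eeg_patterns, metric="correlation")

# Brain-model alignment score: rank correlation between the two RDMs.
rho, _ = spearmanr(rdm_model, rdm_eeg)
print(f"representational similarity: {rho:.3f}")
```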
Researchers present a groundbreaking Federated Learning (FL) model for passenger demand forecasting in Smart Cities, focusing on the context of Autonomous Taxis (ATs). The FL approach ensures data privacy by allowing ATs in different regions to collaboratively enhance their demand forecasting models without directly sharing sensitive passenger information. The proposed model outperforms traditional methods, showcasing superior accuracy while addressing privacy concerns in the era of smart and autonomous transportation systems.
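The core of such a federated scheme is server-side aggregation of locally trained parameters, as in FedAvg, so raw trip data never leaves each region. A minimal sketch with toy linear-model coefficients and made-up regional sample counts:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style aggregation)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three regions' taxis each fit a local demand model and share only its coefficients.
region_models = [np.array([0.8, 1.2]), np.array([1.0, 0.9]), np.array([0.7, 1.5])]
region_samples = [1200, 800, 400]          # local trip counts (data stays on-device)

global_model = federated_average(region_models, region_samples)
print(global_model)                        # coefficients of the shared forecasting model
```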
Researchers from the University of Birmingham unveil a novel 3D edge detection technique using unsupervised learning and clustering. This method, offering automatic parameter selection, competitive performance, and robustness, proves invaluable across diverse applications, including robotics, augmented reality, medical imaging, automotive safety, architecture, and manufacturing, marking a significant leap in computer vision capabilities.
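A rough illustration of unsupervised 3D edge detection: score each point by the local surface variation from neighborhood PCA, then let clustering separate edge from non-edge points instead of a hand-tuned threshold. The toy point cloud and parameters below are assumptions for demonstration, not the Birmingham method itself.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Toy point cloud: two planes meeting at a right angle, so the crease forms a 3D edge.
plane_a = np.c_[rng.uniform(0, 1, 500), rng.uniform(0, 1, 500), np.zeros(500)]
plane_b = np.c_[rng.uniform(0, 1, 500), np.zeros(500), rng.uniform(0, 1, 500)]
points = np.vstack([plane_a, plane_b])

# Local "surface variation": smallest PCA eigenvalue share over each point's neighborhood.
_, idx = NearestNeighbors(n_neighbors=20).fit(points).kneighbors(points)
variation = np.empty(len(points))
for i, nb in enumerate(idx):
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(points[nb].T)))
    variation[i] = eigvals[0] / eigvals.sum()

# Unsupervised split into edge / non-edge points; no hand-tuned threshold needed.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(variation.reshape(-1, 1))
edge_cluster = labels[np.argmax(variation)]     # the cluster holding high-variation points
print("edge points found:", int((labels == edge_cluster).sum()))
```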
Researchers question the notion of artificial intelligence (AI) surpassing human thought. They critique Max Tegmark's definition of intelligence, highlighting differences in understanding, the implementation of goals, and the crucial role of creativity. The discussion extends to philosophical implications, emphasizing the overlooked aspects of the body, brain lateralization, and the vital role of glial cells, ultimately contending that the richness and complexity of human thought remain beyond current AI capabilities.
Researchers introduce a novel multi-task learning approach for recognizing low-resolution text in logistics, addressing challenges in the rapidly growing e-commerce sector. The proposed model, incorporating a super-resolution branch and attention-based decoding, outperforms existing methods, offering substantial accuracy improvements for handling distorted, low-resolution Chinese text.
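A compact sketch of the multi-task idea: one shared encoder feeding both a super-resolution head and a recognition head, trained with a combined loss. The attention-based decoder is replaced here by a simple per-column classifier, and all shapes, class counts, and loss weights are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskRecognizer(nn.Module):
    """Shared encoder with a super-resolution branch and a recognition branch."""
    def __init__(self, num_classes=5000):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
        # Super-resolution branch: 2x upsampling of the low-resolution input
        self.sr_head = nn.Sequential(
            nn.Conv2d(64, 3 * 4, 3, padding=1), nn.PixelShuffle(2))
        # Recognition branch: collapse height, predict per-column character logits
        self.rec_head = nn.Linear(64, num_classes)

    def forward(self, x):
        f = self.encoder(x)                      # (N, 64, H, W)
        sr = self.sr_head(f)                     # (N, 3, 2H, 2W)
        seq = f.mean(dim=2).permute(0, 2, 1)     # (N, W, 64)
        return sr, self.rec_head(seq)            # SR image, per-column logits

model = MultiTaskRecognizer()
low_res = torch.randn(2, 3, 16, 64)              # distorted low-resolution crops (toy)
high_res = torch.randn(2, 3, 32, 128)            # ground-truth high-resolution crops (toy)
labels = torch.randint(0, 5000, (2, 64))         # per-column character labels (toy)
sr, logits = model(low_res)
loss = F.mse_loss(sr, high_res) + F.cross_entropy(logits.reshape(-1, 5000), labels.reshape(-1))
loss.backward()
```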
Researchers introduced Swin-APT, a deep learning-based model for semantic segmentation and object detection in Intelligent Transportation Systems (ITSs). The model, incorporating a Swin-Transformer-based lightweight network and a multiscale adapter network, demonstrated superior performance in road segmentation and marking detection tasks, outperforming existing models on various datasets, including achieving a remarkable 91.2% mIoU on the BDD100K dataset.
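For reference, the mIoU figure quoted above is the mean of per-class intersection-over-union scores computed from a confusion matrix. A small NumPy implementation on toy label maps:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union from flat label maps."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for p, t in zip(pred.ravel(), target.ravel()):
        conf[t, p] += 1
    inter = np.diag(conf).astype(float)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    iou = inter / np.maximum(union, 1)
    return iou.mean(), iou

pred = np.random.randint(0, 3, (64, 64))     # predicted segmentation labels (toy)
target = np.random.randint(0, 3, (64, 64))   # ground-truth labels (toy)
miou, per_class = mean_iou(pred, target, num_classes=3)
print(f"mIoU: {miou:.3f}")
```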
This study introduces a sophisticated pedestrian detection algorithm enhancing the lightweight YOLOV5 model for autonomous vehicles. Integrating extensive kernel attention mechanisms, lightweight coordinate attention, and adaptive loss tuning, the algorithm tackles challenges like occlusion and positioning inaccuracies. Experimental results show a noticeable accuracy boost, especially for partially obstructed pedestrians, offering promising advancements for safer interactions between vehicles and pedestrians in complex urban environments.
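A lightweight coordinate attention block of the kind that can be dropped into a YOLOv5-style backbone looks roughly like this; it is a generic PyTorch sketch in the style of Hou et al.'s coordinate attention, not the paper's exact module, and the channel and feature-map sizes are assumed.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Encode position along height and width into channel attention maps."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                       # pool along width  -> (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # pool along height -> (n, c, w, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                   # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (n, c, 1, w)
        return x * a_h * a_w

feat = torch.randn(1, 64, 40, 40)        # a feature map from a YOLO-style backbone (assumed size)
print(CoordinateAttention(64)(feat).shape)
```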
Researchers propose PGL, a groundbreaking framework for autonomous and programmable graph representation learning in heterogeneous computing systems. Focused on optimizing program execution, especially in applications like autonomous vehicles and machine vision, PGL leverages machine learning to dynamically map software computations onto CPUs and GPUs.
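As a toy stand-in for learned device mapping, the sketch below scores each node of a small task graph with a decision tree trained on made-up (FLOPs, parallel width) examples; the actual PGL framework uses graph neural networks and far richer program features, so treat every name and number here as an assumption.

```python
import networkx as nx
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy training data: (estimated FLOPs, data-parallel width) -> best device (0=CPU, 1=GPU)
X_train = np.array([[1e6, 4], [5e6, 8], [2e8, 256], [1e9, 1024], [5e5, 2], [8e8, 512]])
y_train = np.array([0, 0, 1, 1, 0, 1])
mapper = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)

# A small task graph for a vision pipeline (nodes = kernels, edges = data dependencies)
g = nx.DiGraph()
g.add_node("decode", flops=2e6, width=4)
g.add_node("resize", flops=8e6, width=64)
g.add_node("conv1",  flops=6e8, width=1024)
g.add_node("nms",    flops=1e6, width=8)
g.add_edges_from([("decode", "resize"), ("resize", "conv1"), ("conv1", "nms")])

for node in nx.topological_sort(g):
    feats = [[g.nodes[node]["flops"], g.nodes[node]["width"]]]
    device = "GPU" if mapper.predict(feats)[0] == 1 else "CPU"
    print(f"{node} -> {device}")
```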
This paper introduces FollowNet, a pioneering initiative addressing challenges in modeling car-following behavior. With a unified benchmark dataset consolidating over 80K car-following events from diverse public driving datasets, FollowNet sets a standard for evaluating and comparing car-following models, overcoming format inconsistencies in existing datasets.
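For context, a classical baseline that such a benchmark would evaluate is the Intelligent Driver Model, which maps the follower's speed, gap, and approach rate to an acceleration. The parameter values below are common textbook defaults, not FollowNet's.

```python
import numpy as np

def idm_acceleration(v, gap, dv, v0=30.0, T=1.5, a_max=1.0, b=2.0, s0=2.0, delta=4):
    """Intelligent Driver Model: follower acceleration.
    v   : follower speed (m/s)
    gap : bumper-to-bumper gap to the leader (m)
    dv  : approach rate, v_follower - v_leader (m/s)
    """
    s_star = s0 + v * T + v * dv / (2.0 * np.sqrt(a_max * b))   # desired gap
    return a_max * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)

# One simulated step: follower at 25 m/s, 30 m behind a leader driving 22 m/s
print(idm_acceleration(v=25.0, gap=30.0, dv=3.0))
```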
This study delves into customer preferences for automated parcel delivery modes, including autonomous vehicles, drones, sidewalk robots, and bipedal robots, in the context of last-mile logistics. Using an Integrated Nested Choice and Correlated Latent Variable model, the research reveals that cost and time performance significantly influence the acceptability of technology, with a growing willingness to explore novel delivery automation when cost and time align.
Researchers explored the influence of stingy bots in improving human welfare within experimental sharing networks. They conducted online experiments involving artificial agents with varying allocation behaviors, finding that stingy bots, when strategically placed, could enhance collective welfare by enabling reciprocal exchanges between individuals.
Researchers explored the application of distributed learning, particularly Federated Learning (FL), for Internet of Things (IoT) services in the context of emerging 6G networks. They discussed the advantages and challenges of distributed learning in IoT domains, emphasizing its potential for enhancing IoT services while addressing privacy concerns and the need for ongoing research in areas such as security and communication efficiency.
This study introduces a novel approach to autonomous vehicle navigation by leveraging machine vision, machine learning, and artificial intelligence. The research demonstrates that it's possible for vehicles to navigate unmarked roads using economical webcam-based sensing systems and deep learning, offering practical insights into enhancing autonomous driving in real-world scenarios.
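An end-to-end approach of this kind is often sketched as a small CNN that regresses a steering angle directly from a camera frame, in the spirit of NVIDIA's PilotNet. The architecture, input size, and batch below are illustrative assumptions, not the authors' exact network.

```python
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """End-to-end CNN that regresses a steering angle from a webcam frame."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3), nn.ReLU(),
            nn.Conv2d(64, 64, 3), nn.ReLU())
        self.head = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(), nn.Linear(50, 1))

    def forward(self, frame):
        return self.head(self.features(frame))

frames = torch.randn(4, 3, 66, 200)      # normalized webcam crops (assumed size)
angles = SteeringNet()(frames)           # predicted steering angles, shape (4, 1)
print(angles.shape)
```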
This research presents a novel bi-level programming model aimed at improving Transit Signal Priority (TSP) systems to reduce delays for private vehicles. By considering both public and private transportation, utilizing a game theory approach and genetic algorithms, the study offers a comprehensive solution for optimizing urban traffic flow.
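A genetic algorithm for signal timing can be sketched as evolving green-time splits against a delay objective. The delay function here is a deliberately crude placeholder and the phase shares are assumed; the paper's bi-level, game-theoretic formulation is far richer.

```python
import numpy as np

rng = np.random.default_rng(0)

def total_delay(green_splits):
    """Placeholder delay model: penalize splits far from an assumed demand profile."""
    demand = np.array([0.45, 0.30, 0.15, 0.10])   # share of traffic per phase (assumed)
    return float(np.sum((green_splits - demand) ** 2))

def evolve(pop_size=40, generations=60, n_phases=4, mutation=0.05):
    pop = rng.dirichlet(np.ones(n_phases), size=pop_size)       # green-time splits sum to 1
    for _ in range(generations):
        fitness = np.array([total_delay(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]      # keep the better half
        children = parents[rng.integers(0, len(parents), pop_size - len(parents))]
        children = np.abs(children + rng.normal(0, mutation, children.shape))
        children /= children.sum(axis=1, keepdims=True)          # re-normalize splits
        pop = np.vstack([parents, children])
    return pop[np.argmin([total_delay(ind) for ind in pop])]

print(evolve())   # best green-time split found
```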
Researchers present MGB-YOLO, an advanced deep learning model designed for real-time road manhole cover detection. Through a combination of MobileNet-V3, GAM, and BottleneckCSP, this model offers superior precision and computational efficiency compared to existing methods, with promising applications in traffic safety and infrastructure maintenance.
Researchers have developed a groundbreaking framework for training privacy-preserving models that anonymize license plates and faces captured on fisheye camera images used in autonomous vehicles. This innovation addresses growing data privacy concerns and ensures compliance with data protection regulations while improving the adaptability of models for fisheye data.
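Once plates and faces are detected, the anonymization step itself can be as simple as blurring the detected regions; the paper's contribution is the detection and training pipeline for fisheye imagery, not this step. A minimal OpenCV sketch with an assumed frame and assumed bounding boxes:

```python
import cv2
import numpy as np

def anonymize(image, boxes, ksize=31):
    """Blur each detected region (e.g., face or license plate)."""
    out = image.copy()
    for x, y, w, h in boxes:
        roi = out[y:y + h, x:x + w]
        out[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (ksize, ksize), 0)
    return out

frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)   # stand-in fisheye frame
boxes = [(100, 200, 80, 40), (400, 300, 60, 60)]                   # detector output (assumed)
anonymized = anonymize(frame, boxes)
```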
Researchers introduce NUMERLA, an algorithm that combines meta-reinforcement learning and symbolic logic-based constraints to enable real-time policy adjustments for self-driving cars while maintaining safety. Experiments in simulated urban driving scenarios demonstrate NUMERLA's ability to handle varying traffic conditions and unpredictable pedestrians, highlighting its potential to enhance the development of safe and adaptable autonomous vehicles.
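The symbolic logic-based constraints can be thought of as a runtime shield that overrides unsafe actions proposed by the learned policy. The toy, hand-written illustration below uses predicates and thresholds that are assumptions, not NUMERLA's actual constraint language.

```python
def safe_action(proposed, observation):
    """Override an unsafe proposed action using simple symbolic constraints.

    Illustrative rules: if a pedestrian is within the stopping distance, brake;
    if the gap to the lead vehicle is short, do not accelerate.
    """
    if observation["pedestrian_distance"] < observation["stopping_distance"]:
        return "brake"
    if proposed == "accelerate" and observation["lead_gap"] < 10.0:
        return "hold_speed"
    return proposed

obs = {"pedestrian_distance": 6.0, "stopping_distance": 12.0, "lead_gap": 25.0}
print(safe_action("accelerate", obs))   # -> "brake"
```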
Researchers present an AI-driven solution for autonomous cars, leveraging neural networks and computer vision algorithms to achieve successful autonomous driving in a simulated environment and a real-world competition, marking a significant step toward safer and more efficient self-driving technology.