DeepFusion AI revisits autonomous driving through 4D imaging radar
Interview
SungHun Yu CEO
DeepFusion AI
The moment autonomous driving enters the phase of real-world operation, the question changes. It is no longer about whether you can see better - but whether you can keep going even when you cannot see. When weather and the environment begin to shake the system, and the limits of sensors become visible, how long can the autonomy stack endure? This question has not yet been answered sufficiently through performance competition alone. In fact, it was raised first in domains where failure is not tolerated - maritime and defense. And now it is returning, as the operational realities of robotaxis and autonomous systems come into focus.
DeepFusion AI starts from that point. The reason it talks about 4D imaging radar and RF-domain deep learning is not “to see better,” but to ask what can serve as a reference - a standard - at the boundary where operations begin to collapse.
By Sang Min Han _ han@autoelectronics.co.kr
Dense fog is fair to everyone - to cameras, to LiDAR, and to the human eye.
But the problem facing autonomous driving today goes beyond “can you see or not?” The more fundamental question is this: even when visibility collapses, the system must not stop.
The era when it was acceptable to say, “If it rains heavily, we simply won’t operate,” is fading. A robotaxi will become urban infrastructure, and unmanned systems are already dispatched without human intervention. The problem is no longer just performance - it is operational continuity, and what will sustain that continuity.
In maritime and defense, the answer has long been clear. For systems that must operate in fog, rain, rough terrain, and dust, the sense that can “see what cannot be seen” has been radar. Robotaxis and automotive autonomy, however, have taken a different path: the precision of LiDAR, the richness of camera data, and more recently, the bold choice of “camera only.” But once operations truly begin, the question can shift again: how far can a structure endure if it relies on sensors with known blind spots?
This is the moment when redundancy - in both technology and operations - becomes not a choice but a condition.
That is exactly the point where DeepFusion AI (DFAI) began to attract attention alongside 4D imaging radar.
DFAI brings the language of RF perception - validated first in maritime and defense - into the operational problem of robotaxis and autonomous driving. Not with “sensor fusion” as the headline, but with a deeper question: how should perception itself be defined and standardized?
This approach may feel unfamiliar technically, but it is honest from the perspective of operations. CEO SungHun Yu did not package the story as a technology demo. Through a structured briefing - not a show - he spoke not about “what is possible,” but about “what can be sustained.” Why 4D imaging radar, why “standardizing perception” rather than “sensor fusion,” and why they won the Best of Innovation award at CES 2026. DFAI’s narrative did not begin at the frontier of technology; it began at the boundary where operations start to break.
Not Sensor Fusion - Standardizing Perception Itself
DFAI’s technical starting point is an RF-domain deep learning engine called RAPA-R (Real-time Attention-based Pillar Architecture for Radar).
It is a structure that takes RF data from radar directly as the input for “perception,” performing object detection and classification. In other words, it turns radar data itself into a perception language that deep learning can handle - making radar not a supporting sensor, but an independent perception agent.
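RAPA-R's internal design has not been published in detail, but the description above - pillars over radar points, attention, real-time detection and classification - maps onto a recognizable deep learning pattern. The sketch below is only a minimal illustration of that pattern in PyTorch; every class name, feature choice, and dimension is an assumption made for readability, not DFAI's implementation.

```python
# Minimal sketch of a pillar-style radar detector with attention (PyTorch).
# Hypothetical illustration only: all names, features, and dimensions are assumed.
import torch
import torch.nn as nn

class RadarPillarEncoder(nn.Module):
    """Encodes 4D radar points (x, y, z, Doppler, RCS) grouped into pillars."""
    def __init__(self, in_dim=5, feat_dim=64):
        super().__init__()
        self.point_net = nn.Sequential(
            nn.Linear(in_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )

    def forward(self, pillars):          # pillars: (batch, n_pillars, n_points, 5)
        feats = self.point_net(pillars)  # per-point features
        return feats.max(dim=2).values   # max-pool points -> (batch, n_pillars, feat_dim)

class RadarAttentionDetector(nn.Module):
    """Self-attention over pillar features, followed by a simple detection head."""
    def __init__(self, feat_dim=64, num_classes=5):
        super().__init__()
        self.encoder = RadarPillarEncoder(feat_dim=feat_dim)
        self.attn = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=feat_dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        # Per pillar: class scores plus box parameters (x, y, z, l, w, h, yaw).
        self.head = nn.Linear(feat_dim, num_classes + 7)

    def forward(self, pillars):
        x = self.encoder(pillars)        # (batch, n_pillars, feat_dim)
        x = self.attn(x)                 # pillars attend to each other
        return self.head(x)              # (batch, n_pillars, num_classes + 7)

# Example: 2 frames, 1024 pillars, up to 16 radar points per pillar.
out = RadarAttentionDetector()(torch.randn(2, 1024, 16, 5))
print(out.shape)  # torch.Size([2, 1024, 12])
```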
This is why Perceptive Sensor Standard does not mean “sensor fusion” in the conventional sense - mixing the outputs of radar, camera, and LiDAR at the end. Instead, it is an approach that defines the very criteria of perceiving the world as a common language. Sensors merely fill that standard; the decision structure of perception does not change.
RAPA-R is the technological foundation of what DFAI calls standardizing perception - the Perceptive Sensor Standard - and also the starting point for its discussions on 4D imaging radar, near-range integrated perception, multi-radar omnidirectional deep learning, and radar-based SLAM.
“This is not conventional sensor fusion,” Yu said. “From the deep learning perspective, it’s an approach to standardize perception itself anew. This isn’t about ‘adding something’ to an E2E model - it’s closer to defining perception around radar sensors.”
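One way to read that idea is as a fixed output contract: whatever the sensor mix, the perception layer must emit the same structure, so downstream decision logic never changes. The sketch below is a hypothetical illustration of such a contract in Python; the field names and classes are assumptions, not DFAI's actual specification.

```python
# Conceptual sketch of a "common perception language": one detection format
# that every pipeline (radar-only, radar+camera, radar+LiDAR) must emit.
# Field names and classes are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum, auto

class ObjectClass(Enum):
    VEHICLE = auto()
    PEDESTRIAN = auto()
    CYCLIST = auto()
    VESSEL = auto()
    UNKNOWN = auto()

@dataclass
class PerceivedObject:
    object_class: ObjectClass
    position_m: tuple[float, float, float]    # x, y, z in the vehicle frame
    velocity_mps: tuple[float, float, float]  # radar can measure this directly via Doppler
    confidence: float                         # 0.0 .. 1.0
    source: str                               # e.g. "radar", "radar+camera"

def downstream_decision(objects: list[PerceivedObject]) -> None:
    """Planning depends only on the standard format, never on the sensor mix."""
    for obj in objects:
        if obj.confidence > 0.5:
            print(f"{obj.object_class.name} at {obj.position_m}, v={obj.velocity_mps} m/s")
```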
Cameras and LiDAR have already stabilized industrially along a deep learning pipeline. Radar, as an RF-domain sensor, is only now stepping onto that path - largely because there has not been a standardized dataset. Radar hardware characteristics differ by manufacturer; antenna configurations vary; point cloud formats are inconsistent. Radar has always been physically important, but it was never fully organized as a deep learning language.
DFAI aims to fill that gap through the maturation of 4D radar and the concept of “standardizing perception.” The core method is virtual radar and pre-training.
In DFAI’s description, virtual radar means modeling real RF sensors in a digital environment to generate synthetic radar signals. With these virtual datasets, the model can be pre-trained so that even when the hardware changes, only short fine-tuning is needed to transfer the same perception structure.
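The workflow the company describes - large-scale pre-training on synthetic radar data, then a short adaptation pass on the new hardware - can be sketched roughly as follows. The model, data, and hyperparameters here are placeholders for illustration, not DFAI's pipeline.

```python
# Sketch of pre-training on synthetic ("virtual radar") data, then briefly
# fine-tuning on a small recording from new radar hardware. All data and
# model choices are placeholders.
import torch
import torch.nn as nn

def make_model(num_classes: int = 5) -> nn.Module:
    # Stand-in per-point classifier; a real system would use a full detector.
    return nn.Sequential(nn.Linear(5, 64), nn.ReLU(), nn.Linear(64, num_classes))

def train(model: nn.Module, points: torch.Tensor, labels: torch.Tensor,
          epochs: int, lr: float) -> None:
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(points), labels)
        loss.backward()
        opt.step()

model = make_model()

# 1) Pre-train on a large synthetic dataset generated by a virtual radar model.
synthetic_pts, synthetic_lbl = torch.randn(10_000, 5), torch.randint(0, 5, (10_000,))
train(model, synthetic_pts, synthetic_lbl, epochs=20, lr=1e-3)

# 2) Fine-tune briefly on a small set recorded with the new radar hardware,
#    at a lower learning rate so the pre-trained representation is preserved.
real_pts, real_lbl = torch.randn(500, 5), torch.randint(0, 5, (500,))
train(model, real_pts, real_lbl, epochs=5, lr=1e-4)
```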
“DFAI is expanding to RAPA-RC (Radar + Camera Early Fusion) and RAPA-RL (Radar + LiDAR Early Fusion) to advance perception technologies across different applications,” the company explains. “These fusion models are designed on the same Perceptive Sensor Standard, so the decision structure does not change even when sensor configurations vary. DFAI is further refining these technologies and preparing to introduce new functions and use cases in time for CES in January 2027 (as of the December interview).”
4D Radar, at LiDAR's 'Year 4'
Yu has watched the LiDAR industry for a long time. Early LiDAR sensors did not have enough intrinsic performance to carry deep learning. But once performance improved, data and algorithms followed - and the industry leapt forward. He overlays today’s 4D imaging radar onto LiDAR at that point in time.
“4D imaging radar is in a position similar to where LiDAR was around its fourth year,” he said. “If you look at 4D imaging radar products today - for example, from Korea’s bitsensing or Germany’s Bosch - performance is quite good. The flow itself is encouraging.”
“Performance” is not a vague impression here. It refers to accuracy, point cloud density, and whether that output can be fed into deep learning.
DFAI evaluates point clouds not by points per second, but by sensor cycle. If you take a typical 40-50ms cycle, one 4D imaging radar can yield around 2,000 points per cycle. Compared to spinning LiDAR, the absolute quantity is smaller - but considering radar’s form factor and placement freedom, it is not negligible.
“Of course, it needs to become denser. If we reach around 10,000 points per cycle, the detail will improve dramatically. But the key point is not the number itself.”
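For readers more used to points-per-second figures, the per-cycle numbers translate directly - a quick conversion using the values quoted above and the 50 ms end of the typical cycle:

```python
# Simple conversion between points per cycle and points per second,
# using the figures cited in the interview and a 50 ms cycle.
cycle_s = 0.05                      # 50 ms sensor cycle (upper end of "40-50 ms")
points_per_cycle = 2_000            # roughly what one 4D imaging radar yields today
print(points_per_cycle / cycle_s)   # 40,000 points per second

target_per_cycle = 10_000           # the density Yu cites as a meaningful step up
print(target_per_cycle / cycle_s)   # 200,000 points per second
```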
What matters, DFAI argues, is that the threshold at which deep learning becomes viable is beginning to appear. Once point count and accuracy pass a certain level, radar stops being a supporting sensor and becomes a direct deep learning object. Yu believes today’s 4D imaging radar is reaching that threshold.
Multi-radar-based radar SLAM - described by DFAI as the world's only radar SLAM technology at a commercially deployable level
Operability Over Precision
“RF gives you velocity directly,” Yu said. “So you don’t need complex computations to infer it.”
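As a concrete illustration of that remark, with made-up numbers: a radar return already carries a measured radial velocity from the Doppler effect, while an optical pipeline has to infer velocity by tracking positions across frames.

```python
# A radar detection carries a measured radial (Doppler) velocity directly,
# whereas camera/LiDAR pipelines typically infer velocity from tracked positions.
# All values below are made up for illustration.
radar_detection = {"range_m": 42.0, "azimuth_deg": 3.1, "radial_velocity_mps": -8.4}
print("radar, measured:", radar_detection["radial_velocity_mps"], "m/s")

# Optical path: velocity only emerges after differencing two tracked frames.
pos_t0, pos_t1, dt = (40.0, 2.2), (39.2, 2.2), 0.1   # positions 100 ms apart
v_est = ((pos_t1[0] - pos_t0[0]) / dt, (pos_t1[1] - pos_t0[1]) / dt)
print("optical, inferred:", v_est, "m/s")
```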
LiDAR’s precise distance information is undeniably attractive. Reconstructing space down to millimeters is a clear technical strength. But maintaining that precision in the field is a completely different problem.
There is computational burden, and for spinning LiDAR, even subtle vibrations over long operation can misalign the axis, leading to gradual perception errors. This degradation is not always visible, which means real operation requires periodic calibration and trained personnel.
Environmental factors - rain, snow, fog, dust - also weigh heavily. LiDAR, as an optical sensor, often needs cleaning and protection to maintain transparency. In maritime or rough-terrain environments, salt and dirt shorten its lifespan. In the end, LiDAR’s precision drags along not only the sensor, but also the operational infrastructure and procedures. As Yu puts it, “precision does not always translate into operational efficiency.”
Radar, by contrast, cannot provide the same shape-rich imagery as cameras or LiDAR, and it has field-of-view limitations. But 4D imaging radar combined with overlap offers a different way to solve the limitation. When multiple radars are placed at the front and corners, points overlap in the near range, naturally increasing perception density.
This is what DFAI calls near-range integrated perception. Instead of fusing sensor outputs afterward, it integrates the near range - where accidents actually happen - into a single perception space and interprets it. Radar is central; optical sensors supplement as needed.
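Mechanically, a "single perception space" can be pictured as transforming each radar's returns into one common vehicle frame before detection runs, so the overlapping near range densifies naturally. The sketch below is illustrative only; the mounting layout and numbers are assumptions.

```python
# Merging several radars into one near-range perception space: each sensor's
# points are rotated/translated into a common vehicle frame with its mounting
# extrinsics, then concatenated. Mounting poses here are illustrative.
import numpy as np

def to_vehicle_frame(points_xyz: np.ndarray, yaw_deg: float, offset_xyz) -> np.ndarray:
    """Rotate sensor-frame points by the mounting yaw and shift by the mounting offset."""
    yaw = np.radians(yaw_deg)
    rot = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                    [np.sin(yaw),  np.cos(yaw), 0.0],
                    [0.0,          0.0,         1.0]])
    return points_xyz @ rot.T + np.asarray(offset_xyz)

# Example mounting layout: one forward radar and two front-corner radars.
radars = {
    "front":       {"yaw_deg":   0.0, "offset": (3.8,  0.0, 0.5)},
    "front_left":  {"yaw_deg":  45.0, "offset": (3.6,  0.9, 0.5)},
    "front_right": {"yaw_deg": -45.0, "offset": (3.6, -0.9, 0.5)},
}

merged = np.vstack([
    to_vehicle_frame(np.random.randn(2000, 3), cfg["yaw_deg"], cfg["offset"])
    for cfg in radars.values()
])
print(merged.shape)  # one combined point cloud: (6000, 3)
```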
This makes it possible to upgrade perception structure while keeping the existing, widely deployed vehicle baseline: one forward camera plus multiple radars covering the vehicle’s surroundings. 4D imaging radar is not “adding a sensor”; it is an approach that replaces conventional radar while improving stability and coverage.
The question is not a simple performance comparison, but where to place the reference point so that perception does not collapse under any condition.
“This is not about making sensors ‘premium,’” Yu said. “It’s about redundancy in 360-degree surround perception - and integrated perception down to the near range.”
A System That Must Not Stop, Even in Fog
At some point, Yu’s narrative moved naturally toward the question of “where is it used?” He mentioned robotaxis, but added that in that market, “urgent need” has not fully surfaced - and then he brought up scenes that are far harsher.
“Unmanned surface vessels deploy even when fog rolls in. They deploy even in typhoons.”
When fog thickens, cameras degrade rapidly, and LiDAR cannot guarantee performance either. Traditional X-band radar has long been used in maritime environments, but it is not a complete answer. Within about 500 meters, RF characteristics can create shadow zones - and that is exactly where collisions or engagements are most likely to occur.
DFAI’s approach is to fill that gap with 4D imaging radar. Surrounding a vessel with radar sensors, securing 3D information in the near range, and using deep learning to detect and classify objects - from small boats and fishing vessels to cargo ships, as well as maritime-specific targets like buoys and even wave patterns.
In such conditions, optical-RF fusion is no longer a matter of “better performance,” but of whether the system can survive. Yu summarized this as survivability, and the thread continued into unmanned combat vehicles.
“If you go into rough terrain and get covered in dust for a minute, LiDAR stops seeing immediately,” he said. “In a battlefield, you can’t imagine wiping a spinning LiDAR while operating.”
In defense environments, it is dust, vibration, and shock - even more than rain, fog, or snow - that neutralize sensors first. LiDAR’s short operational lifespan in defense is rooted in that reality. The more precise a sensor is, the harsher its maintenance conditions become - and that burden returns as operational risk. In such domains, RF perception inevitably becomes more central.
Why Radar Becomes the ‘Reference Point’
This is also why DFAI’s core demo was implemented as a structure that performs real-time deep learning perception in 360 degrees with 4D imaging radar alone. It is not a technology stunt. It is the outcome of designing so perception will not collapse even in the harshest conditions.
“People often misunderstand and think this is ‘camera deep learning,’” Yu said. “But we do it with radar data only.”
Yu’s emphasis on “radar-only perception” is not a declaration to exclude cameras or LiDAR. If anything, it is closer to the opposite: establishing a reference point in perception that does not shake under any condition.
He was unexpectedly candid about early fusion as well.
“Early fusion is extremely difficult. Radar has to be good, and camera has to be good. There are far more cases where people try it casually and fail.”
DFAI’s choice is not to bind all sensors at once, but to complete a radar-centered perception structure first, then add other sensors only when operational need is confirmed. In maritime, for example, cameras can recognize shapes, but absolute distance estimation may lack reliability; radar must provide distance and depth in the region it covers for real-world perception to hold. Conversely, in fog, dust, or heavy rain - when optical sensor reliability collapses - radar-only perception acts as the system’s reference point.
Here, the competitive line becomes clear. While others ask, “How can we see better?” DFAI asks, “What can remain as a reference point, so we can keep seeing until the end?” This difference is not about sensor specs or combinations. It is about the operational standard - how you design the system to endure operations.
That is also why DFAI’s first references emerged in maritime and defense before the current wave of discussions with global OEMs and Tier 1s. They validated the approach first where systems cannot stop - where failure is not allowed.
DFAI's technical direction and execution for addressing uncertainty in real-world driving environments were presented by Executive Vice President Lee Kyujin and the engineering team.
A ‘Best of Innovation’ Born at the Boundary
“What if a robotaxi had to cross the Yeongjong Bridge in thick fog?” Yu asked. “People don’t think that far yet. We start from that assumption.”
Not the superiority of technology, but what can sustain the system once operations begin - once stopping is no longer acceptable. That is the “boundary” DFAI refers to.
At one point, the conversation naturally prompted a question for Yu:
“So your goal is to replace LiDAR?”
Yu draws a line.
“Rather than replacement, it’s closer to redefining roles,” he said. “LiDAR remains important in domains where very high precision is required. But in real operational environments, millimeter-level precision is not always the answer.”
The acceptable margin in autonomy is not always millimeters. In real operations, there are many scenes where 10 cm-level stability and sustainability matter more. Through overlap and verification, roles shift gradually.
Following DFAI’s narrative, 4D imaging radar begins to look less like an “alternative sensor,” and more like a response to reality: unmanned surface vessels deploying into fog, unmanned combat vehicles that must survive in dust, and robotaxis designed under city-scale operations and cost structures.
And this is not only about “full autonomy.” Even Level 2 ADAS already faces the same question: redundancy and operational stability under real-world uncertainty.
How will we design operations?
Fog will eventually lift. But operations do not “lift.” What will sustain them?
CES 2026’s evaluation suggests that this question did not remain merely a startup’s provocative framing. DFAI won Best of Innovation in the AI category at CES 2026 for its perception architecture based on 4D imaging radar and RF-domain deep learning. And with that question, DFAI is bringing 4D imaging radar back to the surface.
AEM (Automotive Electronics Magazine)
<Copyright © AEM. Unauthorized reproduction and redistribution prohibited.>