Many studies have discussed the implications of using a training process to develop artificial intelligence: the significant computing capabilities required, the energy consumed, the high cost, the long training time, and the size of the dataset needed. However, the fact that automated driving is considered safer than manual driving shows that the training process is effective at transferring the manual capabilities required to drive a car to artificial models. But is this enough? In this talk, we discuss how attackers can exploit two characteristics that are not transferred to artificial models during the learning process: judgment and reasonability. To demonstrate our claims, we focus on computer vision object detectors. We explore the nature of this problem in the digital domain and show that object detectors are essentially feature matchers that misclassify an unreasonable object because its color and shape match objects in the training dataset. Based on our findings, we present a new attack against object detectors: the ghost misdetection attack. We present videos demonstrating how attackers can remotely attack the commercial real-time object detectors integrated into the Mobileye 630 and the Tesla Model X in three attack scenarios, causing the cars to: (1) stop automatically in the middle of the road due to a ghost that appears for a few milliseconds in a compromised McDonald's advertisement, (2) issue road sign notifications due to a printed advertisement of monthly winning lottery numbers, and (3) decrease their speed due to a ghost of a person projected onto the road. To mitigate ghost detections, we present ghostbusters, a novel method aimed at endowing computer vision object detectors with characteristics that are not transferred during the learning process.
We analyze ghostbusters' performance and show that this mitigation compensates for the judgment lacking in object detectors, decreasing the success rate of the ghost misdetection attack from 81.2--99.7\% to 0.01\%.