Four weeks. That's how long I'd had my brand new, all-electric 2026 Toyota bZ when a driver struck my vehicle on I-405, hitting the side of the car where my toddler and my elderly mother were sitting. A vehicle equipped with dozens of sensors, cameras, and onboard AI systems that monitor everything from lane positioning to braking patterns in real time. My car knew exactly what happened. Every input, every output, every millisecond of data leading up to and through the impact.
I can't access any of it.
When I contacted Toyota about retrieving my vehicle's Event Data Recorder and driving data, I hit a wall that had nothing to do with technology and everything to do with policy. The car collected the data. The car used the data. But the person behind the wheel, the person whose driving generated that data in the first place, has no meaningful right to it under current U.S. law.
In Europe, this would be a different conversation. GDPR Article 15 gives individuals the right to access personal data collected about them. Article 20 gives them the right to receive it in a portable format. If my car knows everything about how I drive, European law says I have the right to see what it knows. U.S. law says almost nothing.
This isn't a niche automotive issue. It's the consumer rights question of the next decade. Every AI-enabled product we interact with (our cars, our phones, our home devices, our workplace tools) is collecting behavioral data, building models from it, and making decisions based on it. The gap between what these systems know and what they make available to the humans generating that data is growing wider, not narrower.
We talk a lot about AI transparency in this industry. Usually we mean model explainability or algorithmic bias. But there's a more fundamental layer: do you have the right to see what an AI system recorded about you? Can you access the data your own behavior generated? And if not, who does that data actually belong to?
My car knew everything and said nothing. That's not a technology problem. That's a design choice protected by a regulatory vacuum. And until we close that gap, every person interacting with an AI-enabled product is generating value they can't access, can't verify, and can't use in their own defense.
This is one of several edge cases I'm exploring in a book I'm developing on AI's unresolved boundaries: https://lnkd.in/diQ4zcww
I'd love to hear from my network. Have you ever been unable to access data that was yours? A vehicle, a medical device, a fitness tracker, a workplace tool that knew more about your behavior than you were allowed to see? Or do you have thoughts on where data privacy rights need to go from here? Drop your story or perspective in the comments, or shoot me a DM. I would love to connect!