
EP 535: How AI Is Changing Personal Data and Privacy Forever

2025/5/29

Everyday AI Podcast – An AI and ChatGPT Podcast

People
Jordan
A podcast host and photography expert with deep knowledge of photography technology and equipment.
Michael Tiffany
Topics
Jordan: I think it's striking, and a little scary, how powerful and cool consumer models and AI technology have become. Today, even without much technical know-how, people can use tools like Google Gemini Live to let AI see your screen in real time. Microsoft's Copilot Vision can see parts of the web pages you browse that even you can't see. Letting AI see you, and handing all your data over to AI, carries both enormous potential and real danger. Wherever you stand on it, today's conversation matters, because this represents the future of generative AI. Generative AI isn't just an LLM you quietly use at home or in the office; it's a live technology, and we need to understand both its power and its danger. Our goal is to help everyone not just keep up with AI, but use it to get ahead in their careers and companies.

Michael Tiffany: At Fulcra Dynamics we built a personal data store for all the data your life produces, such as data from wearables. The goal is to bring all your information-producing systems together in one place, under your control, so you can view it, explore it, and connect it to helpful AI agents. Our customers fall into two main groups: biohackers and people who need a data lake. Biohackers love Fulcra because it brings all their data together in one visual place. The other group uses Fulcra as a data lake: they collect data from various systems, then train an AI to do function calling against that repository, which gives them a personal AI. Fulcra shines with data that isn't already in files, such as continuously updating streams like calendars or heart rate. Our goal is to consolidate data from all kinds of sources and give you one unified home for it. I want to be a cyborg, and I think it's achievable before implants become possible. If the data from all my devices were unified under my control, those devices could augment my cognition in real time. In pursuing that, I can't give up my security lens: we all have the opportunity for cognitive enhancement, but once data enters the latent space of a transformer model, it's very hard to delete. I hope for a future where we can safely connect AI to personal data, but it has to be a two-way door: you can change your mind and disconnect at any time.


Transcript


This is the Everyday AI Show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business, and everyday life. Consumer models and AI technology are getting so good and cool, it's scary, right? Without any real tech know-how, you can go use, as an example, Google Gemini Live and

Gemini's AI can instantly see your screen. You can use ChatGPT's advanced voice mode to interact with a low-latency AI agent that can see the world around you, right? Microsoft Copilot Vision can see parts of the web that you're browsing that even you can't see. So there's obviously great power in letting AI see you. And then when you throw in all your data, I mean, the possibilities are endless, but

there's dangers as well, right? Should we be giving all our data to these big companies? What are the downsides, right, of using these things and giving them your personal data? But I think regardless of where you stand on the topic, I think today's conversation is an important one because this is the future of generative AI, whether you want it or not.

Cameras and embodied AI... it's going to be everywhere. So generative AI isn't just a large language model you sit and quietly use in the silence of your own home or office. Generative AI is a live technology. And so we have to understand the power and the danger. All right. I'm excited for today's conversation. I hope you all are too. If you're new here, hello.

My name is Jordan. This is Everyday AI. This is your daily live stream podcast and free daily newsletter helping us all not just keep up with AI, but how we can use this all to get ahead to grow our companies and careers. I want you, dear listener, to be the smartest person in AI at your company. And here's your cheat code, our website, youreverydayai.com. There you can sign up for our free daily newsletter where we recap

this show and every other show, as well as keep you up to date with everything happening in the world of AI. You can also go to our website and sort 450 shows by category. So whether you want to know about the legal sides of AI, guardrails, ethics, marketing,

HR, whatever you want, it's all categorized on our site from the world's leading experts, free for you on demand. So make sure you go check that out. All right. Technically, we're debuting this show live, but it is pre-recorded. So if you are tuning in for the AI news, it's going to be in the newsletter. Don't worry. But I think you're going to want to listen to today's conversation. I can already tell it's going to be a banger.

Enough chitchat from me. I'm excited for our guest for today. So please help me welcome Michael Tiffany, the co-founder and CEO of Fulcra Dynamics. Michael, thank you so much for joining the Everyday AI Show. It's a pleasure to be here.

All right. I'm excited for this one. Michael, tell us a little bit about what you all do at Fulcra Dynamics. All right. We built a personal data store for all of the data that your life produces from wearables. So we collect a lot of biometrics. You can stream your calendars in, your location. The idea is to take all of your information producing systems and bring it together under your control into one place. So you can see it, make it truly yours, explore it, but also connect it with a helpful AI agent.

So, I mean, who's your average customer? Is it just like dorks like myself who, you know, maybe have like, you know, an Apple Watch or, you know, a couple wearables and they just want to biohack their life or, you know, what's like, who's your average customer and what's everyone using your platform for?

I'd say there are two different big customer types. One really is the biohackers, right? You have multiple wearables. And if you're living that kind of life, it's sort of annoying that every single thing that you buy comes with its own dashboard. That dashboard is probably only on your phone. And so...

If you want to see everything, you've got to check five different screens. And sometimes what you want to do is see everything and you want to see it on your laptop on a big screen or on your desktop where you have a really huge screen. So biohackers are loving Fulcra just to bring everything together and have one visual place.

So similar to how businesses have business intelligence dashboards, this is a personal intelligence dashboard for all your smart devices, right? Oh, totally. Yeah. No, we can get buzzwordy there. It's like the single pane of glass for your life analytics. Yeah. Okay. And then kind of similarly, right? Like...

So just riffing on bringing enterprise norms to consumers: people don't have a data lake. There's no place to plug an AI into. So the other category of users of Fulcra are people who are using us as that data lake. You collect all the data from all these systems, and then you teach an AI to do function calling against your repo, and bam, you've got a personal AI.
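Michael's "teach an AI to do function calling against your repo" pattern can be sketched in miniature. Everything here is hypothetical: the tool schema, the `get_stream` name, and the toy data store are illustrative stand-ins, not Fulcra's actual API.

```python
import json

# Hypothetical personal data store: streams keyed by name.
DATA_STORE = {
    "heart_rate": [{"ts": "2025-01-10T09:00:00Z", "bpm": 62}],
    "calendar": [{"ts": "2025-01-10T10:00:00Z", "title": "Podcast interview"}],
}

# A tool schema in the JSON shape most function-calling APIs expect.
GET_STREAM_TOOL = {
    "name": "get_stream",
    "description": "Return recent entries from a personal data stream.",
    "parameters": {
        "type": "object",
        "properties": {"stream": {"type": "string"}},
        "required": ["stream"],
    },
}

def dispatch(tool_call):
    """Execute a model-issued tool call against the local store."""
    if tool_call["name"] != "get_stream":
        raise ValueError("unknown tool: " + tool_call["name"])
    args = json.loads(tool_call["arguments"])
    return json.dumps(DATA_STORE.get(args["stream"], []))

# A model that supports tool use would emit something like this,
# and your runtime routes it to the dispatcher:
call = {"name": "get_stream", "arguments": '{"stream": "heart_rate"}'}
print(dispatch(call))
```

The model never touches the store directly; it only sees the schema and the JSON the dispatcher chooses to return, which is also where you would enforce access rules.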

Amazing. And give me a quick rundown. So, you know, kind of assume, you know, watches, like, I mean, what other, you know, connectors or hardware or software do you all pull from? We really shine when it comes to the data that's like not already in files. Like it's super easy to upload files to an AI. Like no one needs help for that. And there's plenty of cloud storage that'll store files.

But how do you store, how do you make your own copy of like your calendar or your heart rate, right? Like that's a continuously updating stream and there's no streaming data store for consumers. So we had to build literally the first one. So that data tends to be like biometrics

I think we store your location history better than any other alternative. All those continuously updating things, virtually any IoT device... if you have smart stuff in your house and you want to make your own copy of that, that's a place where Fulcra really shines. You can also upload arbitrary files; there's a library function. The idea truly is to de-silo your data from whatever source

and give you a single home for all of it. So we'll absorb whatever, though I'd say that the unique strengths tend to be the streaming data. - So I definitely want to dive in a little deeper on your personal side and personal experience of all this. But before we get there, I want to zoom out and just answer the question, right? Answer the question of this episode title. What are both the power

And the danger of letting AI see you at all times. Well, I'll start. Yeah, I'll do it in that order. And I'll very much make this personal. I want to be a cyborg. And I think I can be a cyborg before implants are possible.

If you look at consumer tech, this is a magic ring that knows when I'm stressed, which is beyond comic-book technology. This is an amazing thing. Is that just the Oura? Yeah, just an Oura ring. Oura rings are magical. If you look at the total device footprint I have, from a smart bed, connected scales... I've got a car that's

practically a computer with four wheels. The capabilities are really high if all of that stuff were brought together and really unified under my control. So I've been leaning into how all of these devices can actually augment my cognition in real time and make me effortlessly quantitative.

But I'm a hacker. I was a teenage hacker. I joined Ninja Networks. I've done hacker hijinks for my entire life. And so in my pursuit of being a cyborg, I also just cannot give up my security lens. The opportunity here is that we can all be cognitively enhanced. The danger is that

It's really hard to delete stuff out of the latent space of a transformer model. You give it data and it adds it into a latent space and it's...

to some extent, not really yours anymore. So if we're going to use these models, practically there needs to be an undo button, where I can opt in to share my location and my heart rate with, you know, Claude or ChatGPT. Seriously, my custom GPT knows I'm doing this podcast interview right now and knows what my heart rate is.

But you need to be able to revoke that decision and go, actually, no, stop. You can't access this anymore. So I think the future I'm trying to bring about is one where we can safely interface AI with your personal data, but that has to be a two-way door. You have to be able to actually change your mind later and say, never mind, you're cut off.

Are you still running in circles trying to figure out how to actually grow your business with AI? Maybe your company has been tinkering with large language models for a year or more, but can't really get traction to find ROI on Gen AI. Hey, this is Jordan Wilson, host of this very podcast.

Companies like Adobe, Microsoft, and NVIDIA have partnered with us because they trust our expertise in educating the masses around generative AI to get ahead. And some of the most innovative companies in the country hire us to help with their AI strategy and to train hundreds of their employees on how to use Gen AI. So whether you're looking for chat GPT training for thousands or just need help building your front-end AI strategy, you can partner with us too, just like some of the biggest companies in the world do.

Go to youreverydayai.com slash partner to get in contact with our team, or you can just click on the partner section of our website. We'll help you stop running in those AI circles and help get your team ahead and build a straight path to ROI on Gen AI. So you very clearly laid out

a little bit of the power and a little bit of the danger as well, right? Like you have to still have some hold of your data and security. But what happens, right? Because I think the conversation in early 2025 has already shifted from large language models to artificial general intelligence to superintelligence, right? Yeah.

What are maybe the dangers as we look down the road? Because more and more people now are starting to use your advanced voice mode, your Gemini Live, right? All of these live AI assistants that are so easy to use and actually really, really good.

So what kind of dangers are we looking at in the medium term, as we have this quick emergence of new live technology that can see us? But then we're already talking about superintelligence now, right? Right. Yeah. Well, let's talk about a way in which society can go sideways, which is: we can all become paranoid.

I think there's something deeply important about privacy. We should probably consider privacy a human right, a basic human right. And why? Because when you take privacy away, it messes with your head, right? We do not want all of our fellow citizenry to be paranoid about who's watching, right? And what data is being collected.

So a principal way that this can go wrong is that now that superintelligence seems to be within grasp, and it can lift everyone up by being a helpful thought partner, that has to be driven by, let's call it, observability, to use the nerdy term.

That observability needs to have some privacy protections or it'll create this feeling like the feeling that we're always being watched, which I just think is not a good feeling. That's not a way in which we want society to head. And that is a near-term risk because...

Think about the number of, for instance, surveillance cameras that just have a security purpose that have ever been installed across the entire world. Well, we don't think of that as too creepy because there isn't an infinite number of people who are literally watching every camera. However, you add an AI model that can understand what's being seen, and every single surveillance camera that's ever been installed becomes an actual surveillance system

that's interpreting what it's seeing. That's crazy. And it's like, it's not going to take

years of effort. It's almost a light switch. We just take the feed that already exists, we add the AI to it, bam. We now have intelligent eyes behind every single screen. So that transition from the safety of privacy to, wait a minute, you can't be paranoid enough, might happen much faster than society is ready for. And I think it's important to

call certain things out because AI and large language models move extremely fast, especially if you've been sleeping the last like, you know, four or five weeks, you know, because the reality is

All these models are multimodal by default now, right? As an example, the quote-unquote older GPT-4 models... it was technically using three different models under the surface, right? But now with the "o", or Omni, model, it's all one. So these models, Gemini 2.0 as well, are multimodal by default. At least Gemini, right? It understands video, it understands audio. People think they're just some text machines, but they're not, right?

And as world models become more and more popular and more and more available, these AI systems are going to know a lot more, the more we give them. So I actually want to rewind a little bit, Michael. You kind of gave us a bullet-point list of, hey, I have a smart ring and a smart bed and all this. Can you just give us the full rundown, but also say: here's what I've learned from allowing AI to see everything about me, and how that's impacted your decision-making?

Oh, okay. All right. So I've experimented with all kinds of things. So I'll walk you through some of the things that I found extremely valuable, and then the duds. I'll go back to an eye-opening experiment I did,

coincidentally, literally a year ago today, which is when I first got my own custom GPT up and running with access to my Fulcra data store so I could share all kinds of real-time systems with this GPT. I named it Operator. And in one of my early test queries, I was about to get on a plane to go to a hacker conference, ShmooCon, and I asked Operator: where should I have breakfast after my flight tomorrow?

Operator did what I expected, which is it made the function call, got my calendar information in a JSON blob, parsed it. It did something better than I anticipated. And I did not program this either as prompt or anywhere in the tech stack. It found the flight. Great. It looked ahead in my calendar, saw where I was staying, saw the hotel that I was booked at.

And it specifically made recommendations about where I should eat: it gave me five restaurant recommendations that serve breakfast near my hotel, which is brilliant. I was expecting it to give me recommendations maybe near the airport, maybe somewhere within Washington, D.C. It was extra insightful, looking ahead and realizing that eating near the hotel would be much more convenient than eating near the airport.
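The lookahead behavior described here, preferring the hotel over the airport as the anchor for recommendations, reduces to a small heuristic over calendar entries. This is a toy reconstruction with made-up entries and titles, not Operator's actual logic.

```python
from datetime import date

# Toy calendar entries, loosely modeled on the story above (all hypothetical).
CALENDAR = [
    {"date": date(2025, 1, 10), "title": "Flight UA123 to Washington, D.C."},
    {"date": date(2025, 1, 10), "title": "Check in: Hotel Harrington"},
    {"date": date(2025, 1, 11), "title": "ShmooCon talk"},
]

def anchor_location(day):
    """Pick the most useful base for recommendations on a given day:
    prefer where you'll be staying over where you merely land."""
    hotel = next((e for e in CALENDAR
                  if e["date"] == day and "Hotel" in e["title"]), None)
    if hotel:
        return hotel["title"].split(": ", 1)[1]
    flight = next((e for e in CALENDAR
                   if e["date"] == day and "Flight" in e["title"]), None)
    return flight["title"].split(" to ", 1)[1] if flight else None

print(anchor_location(date(2025, 1, 10)))  # prefers the hotel over the flight
```

In practice a model does this lookahead implicitly from the raw calendar JSON; the point of the sketch is that the calendar alone carries enough signal to pick the right anchor.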

So I think I've almost been chasing that magic ever since... that "whoa, you did better than I kind of asked for" feeling. And as it happens,

I would say giving models access to my calendar has been extraordinarily fruitful. There's a whole bunch of inference that's available when you do this about just like who you are as a person. Like I literally did not explain

like who I'm married to, who my children are, but you can get that from the calendar by looking at recurring calendar reminders, which was wild. And so that's been a source of proactivity, right? To, like, be a good person,

which is a lot of fun. Location, it turns out... so my location history is constantly being generated from my phone, and now, via my Fulcra data store, any AI hookup is able to access that location history. It turns out that with lots of memories, the way you encode your memories in your meatspace neural network,

it often uses the hippocampus to encode things relative to location. So when you faintly remember something,

There's sometimes a location angle to that. You're like, oh yeah, Bob said something to me... you're trying to remember that thing, but you remember where you had the conversation. Then you can locate that. So here's the wild part, stringing things together: you want to remember the details, so you can get to a location.

Then from the location, I can get to a timestamp, so then I can find it in my AI transcript, driven by whatever... Otter, for example.

So lots of following the threads to essentially have AI-assisted memory. Your brain is like this rich data store, but your lookup system is non-deterministic, right? You don't have a good search function on your brain. So if you can use the AI to help you with search, then it'll get you to the thing that triggers the full memory out from your brain. So that's been tremendously helpful.
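The thread-following Michael describes (place, then timestamp, then transcript) is essentially a chain of lookups. A minimal sketch with invented data; real location histories and transcript exports from a tool like Otter would need real parsers.

```python
from datetime import datetime, timedelta

# Toy location history and meeting transcripts (all hypothetical data).
LOCATIONS = [
    {"ts": datetime(2025, 1, 8, 14, 0), "place": "Blue Bottle Coffee"},
    {"ts": datetime(2025, 1, 9, 9, 30), "place": "Office"},
]
TRANSCRIPTS = [
    {"ts": datetime(2025, 1, 8, 14, 10), "speaker": "Bob",
     "text": "We should ship the beta next month."},
    {"ts": datetime(2025, 1, 9, 10, 0), "speaker": "Ann",
     "text": "Budget review moved to Friday."},
]

def recall(place, window=timedelta(hours=1)):
    """Place -> visit timestamps -> transcript lines near those visits."""
    visits = [loc["ts"] for loc in LOCATIONS
              if place.lower() in loc["place"].lower()]
    return [t for t in TRANSCRIPTS
            for v in visits
            if abs(t["ts"] - v) <= window]

hits = recall("blue bottle")
print(hits[0]["text"])  # the line from the coffee-shop conversation
```

The fuzzy human query ("that thing Bob said at the coffee place") becomes two deterministic lookups, which is exactly the search function the brain lacks.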

I've also tried random stuff. I especially want to understand my own patterns, like my own patterns of eating. Because tracking your eating is a chore. So I was like, can I outsource this chore to AI? Can I use an image model, especially, to make this easier? So I've tried some weird stuff. Here are two things where I'm still tweaking. One is just using a cheap webcam pointed at the refrigerator to catch me when I'm snacking.

It's actually a way of not doing my work. I want to procrastinate and get up and go look at the fridge, see what's in the fridge. So it's been interesting to monitor that. I also tried, and this is almost good... I installed smart breakers. So I'm getting a signal from all of the power usage in my house.

And an AI model can apply inference to, for instance, look at the power to the stove to do effortless tracking about when I'm doing cooking. Now, that turns out to be noisy. This is almost good.

A future experiment of mine will probably use a camera pointed at the stove to try to capture what's literally being cooked along with power monitoring. And then the power monitoring will also reveal over time rhythms of my house, like how often are we eating dinner at the same time? Are there seasonal variations?
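Turning a noisy breaker signal into "when were we cooking" is, at its simplest, threshold-and-run detection. This is a sketch under assumptions Michael doesn't specify: one power sample per minute and a 500 W cutoff are invented numbers.

```python
def cooking_intervals(samples, threshold_w=500, min_len=2):
    """Find runs of sample indices where stove-circuit power
    stays at or above the threshold for at least min_len samples."""
    runs, start = [], None
    for i, w in enumerate(samples):
        if w >= threshold_w and start is None:
            start = i                      # run begins
        elif w < threshold_w and start is not None:
            if i - start >= min_len:
                runs.append((start, i))    # run ends, long enough to keep
            start = None
    if start is not None and len(samples) - start >= min_len:
        runs.append((start, len(samples)))  # run still open at end of data
    return runs

# One sample per minute from a smart breaker (made-up numbers).
watts = [5, 8, 1200, 1500, 1480, 900, 10, 4, 2000, 2100, 1900, 12]
print(cooking_intervals(watts))  # -> [(2, 6), (8, 11)]
```

The noise Michael mentions is why `min_len` exists: brief spikes (a kettle, a compressor kicking in) get discarded, while sustained draws count as cooking sessions whose start times reveal the household rhythms he's after.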

So these are works in progress, though. I would say that broadly, what I've been most happy with is understanding my own patterns and helping me recall things, where I can just pull on one thread and get to the big memory. Yeah.

Yeah. It's so, so interesting, Michael. It's like you've almost Big Brothered yourself, which, you know, some people are fine with, right? Even me... hearing you talk, I'm like, okay, I want Michael to be my personal biohacking mentor. I don't know how to do half of this stuff, but, I don't know, I'm always good giving all my data away... take everything, right? But for your own personal privacy, I mean, do you have a kill switch? Do you have an off button? Because people are probably thinking, hey, this could go bad in the future if AI goes off the guardrails.


I know, right? Okay. So yeah, two responses there. You're not the first to make that observation. We've been kicking around at Fulcra the idea of some high-end consulting services, like an AI SWAT team, right? We're just going to show up and set everything up for you: just tell us what you care about, and we'll figure out how to monitor it. Which I think would be kind of a fun business. But the kill switch is everything. Plus you need to be smart about intelligent routing. So for instance,

I'm a huge fan of foundation models, but I don't want to use them everywhere, especially when it comes to experiments with self-monitoring with cameras. Because you're going to capture stuff that...

You don't want anyone else seeing, right? I'm not the only person who goes to the refrigerator and opens it. Sometimes people are going to be doing that in various stages of undress, right? This is not something to necessarily send to OpenAI. So in that particular case, you want to hook that camera up to a local image model... a small-parameter model you can just run on

some local-to-you computer, an old laptop or something, that's doing the pre-processing: maybe discarding a whole bunch of stuff and pulling out the intelligence that matters for my, you know,

silly food tracking, right? So sometimes I think the answer is you want to use a combination of local models and foundation models and do a whole bunch of scrubbing where you just delete stuff. Second, and I think more globally important,

is that everyone who works in software engineering understands that you have to measure what matters. This is why we're hung up about observability. So if you don't have observability over the most important metrics, then of course you are not managing those metrics correctly.

Well, that applies to life as well. So in order for an AI to be able to help you, it kind of needs to see you. And what's important to me as someone who's worked in computer security for a long time is that that can't be a one-way commitment.

Right. I do think that a lot of tech behemoths are going to say, listen, we run the best model and we already host your email. We already have this data and that data. Why don't you give it all to us? And that freaks me out. I think that you don't want all of your personal information literally right next to the model. You want...

to grant a model temporary access to your data. And you want to be able to say, today I changed my mind and I'm not going to explain myself. I've just cut you off.
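The local-first scrubbing Michael described earlier, where a small on-device model filters camera frames so that only minimal event records, never raw images, reach any cloud model, might look like this sketch. The classifier here is a stub returning a pre-set label; a real setup would run a small local vision model on the actual pixels.

```python
def local_classifier(frame):
    """Stand-in for a small on-device vision model (hypothetical labels)."""
    return frame["label"]  # a real model would infer this from pixels

def scrub_and_forward(frames):
    """Keep only food-related events; discard everything else locally."""
    forwarded = []
    for frame in frames:
        label = local_classifier(frame)
        if label == "person_at_fridge":
            # Forward only a minimal event record, never the raw image.
            forwarded.append({"ts": frame["ts"], "event": "fridge_visit"})
        # Any other frame (e.g. someone in various stages of undress)
        # never leaves the local machine.
    return forwarded

frames = [
    {"ts": "08:01", "label": "person_at_fridge"},
    {"ts": "08:02", "label": "empty_kitchen"},
    {"ts": "08:03", "label": "person_at_fridge"},
]
print(scrub_and_forward(frames))
```

The two-way door lives at this boundary too: because the cloud side only ever receives the scrubbed event stream, cutting off access is as simple as stopping the forwarder.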

Yeah, it's interesting, Michael, because you were there talking about, as an example, local inference, edge AI, small language models. We talked about it on this show earlier this week: some interesting research from Microsoft came out where, essentially, they figured out model parameter sizes, right? And if you're not that big of a dork: edge AI is, essentially, offline... a small language model. You can't run these huge models locally; the original GPT-4 was reportedly like 1.7 trillion parameters. But this recent Microsoft paper said that GPT-4o mini, which is a very capable multimodal model, was only 8 billion parameters. So, Michael, I'm guessing if we have this exact same conversation January 10th, 2026,

We are going to have frontier models that in theory could live on that Oura ring, right? How does this change the future of what's possible as these models get smaller and you can move them on-device? But then also, how does this change it for business, right? I get it, we're coming at it from this biohacking angle, which I love, the personal biohacking angle. But as it becomes more powerful, how can this change what we can do for our companies and careers as well? Yeah.

I think that people who have been working in enterprise software have a profound advantage in predicting the future of personal computing right now. Because one, we see NVIDIA coming out with like a local supercomputer, right? So if you've been working in enterprise software, especially...

as we cycle between believing in on-prem and believing in client-server, which is now rebranded as, you know, cloud, right? You see the pendulum going back and forth. I think there's going to be a resurgence in local compute. Lots of personal computing is essentially cloud-based at this point, and I think the desire for private local inference is going to drive a mix

of on-prem and in the cloud for everyone, which is going to be a really fun transition to live through. It's not just OpenAI innovating in low-parameter models. Of course, Microsoft also recently released Phi-4, which hit awesome levels of performance with only 14 billion parameters. Amazing. So I think these models are going to be within reach of local hardware. But...

The addition of inference to get better answers, as illustrated by the amazing demo of o3, suggests that we're not going to be eliminating cloud-based frontier models for a very long time. Instead, we're going to have this mix. You'll have some local compute. Then when you want

to really think through a problem in some sort of hardcore way... if you want really advanced reasoning, that's probably not going to be local. It's probably going to be cloud-based. So there's going to be this incredible burden of orchestration,

the kind of stuff that we've all been struggling with as enterprise SaaS engineers, now facing every consumer. If you think about it, every consumer is living a life that's much like the enterprise from a decade ago: a mix of on-prem and in the cloud, multi-device,

from multiple manufacturers. They don't all work together, and people don't have middleware to plug it all into. So the orchestration burden is real and it's basically totally unsolved. So if you're an entrepreneur thinking, how do I build a business that has a moat

as intelligence gets cheaper and cheaper, I think orchestration is this giant unsolved problem. So much that I want to dive into, Michael, but this would go on for many hours, right?

As we wrap up today's show, because we've talked about a lot and my brain's going in a million directions... I'm sure everyone else's is as well... I'm going to ask you to bring it all back for us. What do you think is the one most important takeaway for people to understand? Because more and more people are going to be in your shoes, businesses as well, as this shifts from more of a personal biohacking thing to "oh, our company can start doing these things as well." What's the one most important thing that you want people to know about the power

and the danger of letting AI see you? - Ooh, it's...

Wow. Put yourself in charge by experimenting now. Get started: make your own GPT, even without coding skills, so that you're almost training your brain to think about ways to bring an AI to bear on the problems you face. This is going to put you almost instantly on the leading edge, because

operationalizing these models requires almost a feel for it, right? You need to train your tacit expertise in delegating thinking to the model. So that is the number one thing. And then my second takeaway is:

Think about the data-producing devices that are in your life right now and where they live. So what third parties are you already arming with your personal data? And do you want that data to live there? Start getting control over your own, let's call it, data footprint.

Yeah, it's so important. That's great advice as we're all dealing with this swirl of innovation and data and technology and AI in the early parts of 2025. So Michael, thank you so much for joining the Everyday AI Show. We super appreciate your insights. It was a pleasure to be here, and this was an awesome conversation. Thank you. All right. As a reminder, y'all, that was a ton.

I'm not going to lie. My head is spinning with possibilities, ideas, all of that. We're going to be breaking it all down in today's newsletter. So if you haven't already, first of all, why haven't you? But you need to go to youreverydayai.com. Sign up for that free daily newsletter. Also, everything you need to keep up, it's all there and on our website. So if you haven't already, go to youreverydayai.com. Thank you for tuning in. Hope to see you back tomorrow and every day for more Everyday AI. Thanks, y'all.

And that's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going. For a little more AI magic, visit youreverydayai.com and sign up to our daily newsletter so you don't get left behind. Go break some barriers and we'll see you next time.