
325: Unmasking Hidden Bias in AI—Who’s Really in Control? Data Ethics & Responsibility with Dr. Brandeis Marshall, DataedX Group CEO

2025/3/3

AI and the Future of Work

People
Brandeis Marshall
Topics
Brandeis Marshall: I believe AI companies should be regulated like scientific entities, because they optimize human behavior. That calls for regulation aligned with the scientific process and for accountability from everyone in the data pipeline. Data ethics is about how data is used in a given context; data must not be weaponized. Many AI companies are really science companies whose goal is to optimize human behavior, not the AI itself. All AI is a data problem, and data ethics is about using data responsibly to augment the human experience. Everyone in the data pipeline is responsible for how data is handled and used, including developers. Developers have a responsibility to question how data comes in to them, to control how it is processed, and to hand it off responsibly. Data handling is a system, and every participant bears responsibility. Even the lowest-level person writing code has a duty to question bias in the data and the algorithms. Everyone in the data pipeline is accountable for potential bias and needs to speak up. I believe AI can be used to surface inequities, provide evidence, and augment human communication. AI can help people understand terminology, enabling more effective conversations. AI can serve as a support tool for professionals, helping them work more efficiently. AI can help neurodivergent people overcome communication barriers and create content. AI should be treated as a support, not a replacement. If we use AI responsibly, it becomes a powerful enabler. Algorithmic decision-making in healthcare should be transparent, and patients should be able to access their complete medical records. Patients should have portable records that belong to them, containing all relevant information, including when algorithms were used. The secret of my success is engaged parents, an all-girls school experience, and the people who supported me.

Dan Turchin: As the host, I guided the conversation with Dr. Brandeis Marshall, asking how AI companies should be regulated, what data ethics means, and how to address potential bias in data. I also explored AI's applications in healthcare and how AI can be used to advance equity and inclusion.


Chapters
This chapter explores the controversial idea of regulating AI companies like scientific entities, emphasizing the responsibility of everyone involved in the data pipeline for ethical decision-making. The discussion highlights the need for oversight to prevent harm caused by AI systems.
  • AI companies should be considered scientific entities requiring oversight and regulation.
  • Everyone who handles data has a responsibility for ethical considerations.
  • AI systems should not harm people.
  • Data ethics involves understanding the context and responsible use of data.

Transcript


We really should be considering AI companies really scientific entities that therefore need a certain amount of oversight and regulation that aligns with the scientific process. And this is controversial. Understand that everyone who touched the data has a responsibility in order to say something.

It's not one dimensional. The ethics is really trying to unpack what is the context in what situation and how are we using it? And are we using it in the right way? Because data can't be weaponized, right?

Good morning, good afternoon, or good evening, depending on where you're listening.

Welcome to AI and the Future of Work, episode 325. I'm your host, Dan Turchin, CEO of PeopleReign, the AI platform for IT and HR employee service. Our community is growing. We get asked all the time how you can meet other listeners. To make that happen, we launched a newsletter. In fact, we've now produced about 30 of them. Super popular. We include clips from episodes that don't always make the final cut.

And lots of fun facts as well. Go ahead and subscribe to that newsletter in the link in the show notes. If you like what we do, please tell a friend and give us a like and a rating on Apple Podcasts, Spotify, or wherever you listen. If you leave a comment, I may share it in an upcoming episode like this one from Antonio in Baltimore, Maryland, who is a college professor and listens while biking to campus.

Antonio's favorite episode is that excellent discussion with Armin Bergicli, CTO of BetterUp, I love that one, about humanizing work using AI to match coaches with employees. We learn from AI thought leaders weekly on the show. Of course, the added bonus, you get one AI fun fact each week. Today's fun fact: Ron Guerrier writes on CIO.com about how we can shape the future of AI responsibly.

In it, he writes that as with any transformative technology, AI comes with risks, chief among them, the perpetuation of biases and systemic inequities.

To guide AI's development responsibly, we need to think of it not just as a tool, but as a growing child. Guerrier uses Dr. Urie Bronfenbrenner's ecological systems theory to describe the evolution of AI. At the most immediate level is the microsystem: the developers, engineers, and users directly interacting with AI. Without diverse perspectives among developers, AI will continue to misrepresent and exclude marginalized communities. We talk about that often.

Next is the mesosystem, which represents the relationships between key actors, tech companies, governments, and researchers. Guerrier goes on to describe three other levels using ecological systems theory, the exosystem, macrosystem, and chronosystem. My commentary, as I've said before, AI is perfectly designed to replicate human bias. Be aware.

of that as you make decisions about how and where to use it that will have increasingly impactful and sometimes unintended consequences for your team and, of course, your customers. We'll continue to discuss that important topic as soon as, let's say, now. And, of course, we'll link to the full article in today's show notes. Now shifting to this week's conversation.

Today's guest is a vocal advocate for the practice of responsible data science. Dr. Brandeis Marshall is the CEO of DataedX Group, a data ethics and learning development agency that helps teams understand and rectify discrimination in data.

Previously, Dr. Marshall was a professor of computer science at Spelman College and a faculty associate at Harvard. Dr. Marshall received her master's and PhD degrees in computer science at Rensselaer Polytech. Go RPI Red Hawks!

Thanks to mutual friend and excellent former guest, Kai Nunez, for the intro. Without further ado, Dr. Marshall, it's my pleasure to welcome you to AI and the Future of Work. Let's get started by having you share a bit more about your background and how you got into the space. Awesome. Thanks for having me, Dan. This is going to be an awesome conversation. So a little bit about me.

Well, let's not start from the very beginning. Let's just start with the space of the fact that I was faculty for nearly 15 years. And I decided that my time was better spent outside of the academic walls for various reasons we don't have to get into. But just know that my background is of course computer science and I delved into data science because data is my tech jam. I love databases, data modeling, data analytics.

data engineering. And I just wanted to see it done better because the impact of data and how it is being used in all facets of our society is important to quantify, to understand. And yeah, that's how I got into the space. I just got frustrated with

seeing data being misused and abused, and I wanted to be a change agent. I wanted to be part of the solution, not just part of calling out the issues.

What was it like transitioning from being in the ivory tower in academia to being a practitioner? What lessons did you take away from academia? And then what have you had to maybe unlearn as you became an entrepreneur? Oh my goodness. We don't have enough time to talk about all of that, Dan. That would be a four-part series. But the biggest lesson that I learned was as an academic, I was a CEO of one, right? I kind of...

I jokingly called myself a COO, right? Company of one as an academic. And that transitioned very well when I became a solopreneur because I knew I had to have my hands in multiple pots at the same time. And I also recognized when I needed help.

Not necessarily at the right time. Sometimes I was out over my skis, because as an academic, you tend to just push forward. And so I had to learn as a business owner when to go, no, I cannot do this. I need to hire for this or I need some support in how to get this particular activity or task done. But as an academic, I think the practice of public speaking is

something that made the transition to the business world much easier, because marketing calls or sales calls, as they're traditionally called, were pretty easy for me because I'm used to performing, right? I had to perform three, sometimes four times a week in front of a class of mildly interested students. Yeah.

to talk about data science and computer science topics. So to be able to take a concept and make it snackable

is something that I definitely learned as an academic and as a department chair, and that I was then able to translate very easily over into the business world. And I tried to do my best to help others make that pivot or transition by showing them that the messaging component isn't as difficult if you sort of rest on your educational laurels.

I've said on this show before that in order to be a leader in a STEM field in the future, I believe that a grounding in things like ethics and philosophy and even sociology or maybe anthropology is as important as learning the hard sciences. Coming from academia,

When will we incorporate some of those softer skills or non-hard sciences into degrees like computer science and data science? I honestly believe it's already being done. It might not be called computer science right now or might not be called data science curriculum right now. But what it tends to involve is there's faculty and there's students involved.

who are now asking some critical questions around what is the impact of this algorithm? What's the application of this tool? How do we utilize the platforms in responsible ways? How do we step away from a particular platform? How do we migrate from one platform to another?

And these types of critical questions that are being asked as part of the traditional curriculum are now bolstering this conversation around how we need to have better communication skills within the sciences. And what does that mean, right? So one of the epiphanies that I've had over the past few years watching this AI hype doom circle, I mean cycle, I don't really know what to call it, is that AI is very scientific.

But yet we operate as though AI is somehow an end goal. And it's not. A lot of the AI-informed, AI-powered companies are really science companies. Humans are the product that they're trying to optimize.

Right. They're trying to learn. Yeah, they're trying to learn the people in order to make the AI product better. So we really should be considering AI companies really scientific entities that therefore need a certain amount of oversight and regulation that aligns with the scientific process.

And this is controversial and it brings up a lot of, you know, feelings that people are like, no, no, no. It's a science. It's about optimization of the algorithm. Well, the algorithm is including people. And if you're including people, it shouldn't be controversial. If you're including people, then you need to ensure that the actions you're taking with the product are not going to harm the people. And that is very important.

very much aligned with a lot of social science practice when it comes to human subject testing. Yeah, in the opener, I talked about how AI is perfectly designed to replicate human bias. All AI is a data problem. Now, you and I live this every day, and you're an expert. So maybe for all those who feel like, you know, there is no intersection between data and ethics, what is data ethics? So data ethics to me is...

how do you responsibly use data in a way that continues to add to or augment the human experience? That could mean that the data

is trash, like it's not good anymore, it's aged out, it's not relevant, it's not pertinent or useful. It could also mean that the data itself is useful, but only in certain contexts and can't be applied to other contexts. So the ethics is these nuances of how you are using the data

and where you are deploying practices in order to leverage the insights that the data actually has for that situation or that circumstance. So yes, I do a lot of qualifiers here, but it's important to have the context. And I think that's what's really missing in a lot of conversations around data is that people talk about data as this big

black hole. But no, data comes in many forms and it needs to have context around it. It has historical context, social context, economic context. It's not one dimensional and we shouldn't see it that way. And so the ethics is really trying to unpack what is the context in what situation and how are we using it? And are we using it in the right way? Because data can't be weaponized, right?

Absolutely. But not in obvious ways. So let's say I'm a computer science student. I graduate from Dr. Marshall's course, and I'm a great practitioner. And I go and I get a job commercializing, writing code, doing some of the things that I learned, getting my CS degree. I'm not an ethicist. I mean, data is truth, right? I'm just there to serve some commercial purpose, build some software, build an algorithm that

you know, extracts insights, but I shouldn't be responsible for how the data was collected or, and yet. You are. Maybe you are. So unpack that. Why might I be responsible for the unintended consequences of the, you know, the latent bias in the data? Yeah, because you are handling the data.

Every bit of data is translated and then outputted as new data. So you as a developer, you as a manager of other developers, you as a marketeer explaining the data and the algorithms are all part of a system that is creating new data, which is how the original raw data is interpreted.

So you're part of the data pipeline. And as a participant in the data pipeline, you therefore have a responsibility to not only question how the data gets inputted to you; you also have control over how the data is handled when it's with you.

And then you have a responsibility in order to hand off the data to the next person in the pipeline in a certain capacity with certain context. So you are part of a train, right? A train cannot move if all of its cars are not attached to each other. So you just happen to be part of the train.

And that train starts with the data collection. It moves through the data storage. It moves then to the data analysis. Then it moves to visualization. Then it moves, of course, to some type of storytelling or communication. And then, of course, it's going to be out the door, right? So it's going to be some type of product. So every one part of the train has a responsibility. And there should be this sort of from the bottom up

conversation around what is the ethics that is being attached to each part of that process and how are those ethics being addressed. So even you as a low person on the totem pole just writing the code, you have a voice to say, wait, I'm using this search algorithm and this search algorithm is known to do funky things, right?

Or hey, I'm trying to now reduce a person to a set of numbers to characterize them. And therefore, it's going to manipulate how maybe marginalized people are viewed versus non-marginalized people, right? You have a responsibility to say something and that should be attached to your output and your outcomes when you hand it off. And then the person that gets it next

knows about those biases, has that disparity understanding. And therefore, when they're handling the data and making their manipulations, they can then have that conversation and add to it and then move it on to the next and so on and so forth. So by the time it gets to the VP of products, and then of course to the C-suite, of course to the CEO, if it gets greenlit, then yes, it is the company. But understand that everyone who touched the data

has a responsibility in order to say something. Hopefully that unpacked it. I took a little time with that. I like that concept of the data pipeline and everyone that touches the data, that whole kind of chain of custody. Everybody has responsibility. Yeah. So one of the examples that we talk about frequently on this podcast is...

When demographic data is used to make hiring decisions, just to make the point about latent bias in the data having unintended consequences, if we're relying on data to, let's say, predict what kind of candidate is going to be most successful here, the data has, let's say, less affluent zip codes underrepresented. Mm-hmm.

And so it's going to look like, you know, to be successful here, you have to have, you know, been raised in a, you know, quote, affluent zip code. That's one, I mean, fairly high profile example. But you see a lot of data sets and, you know, hear about a lot of use cases. What are some other examples like that where the listeners might not appreciate, you know, the impact that kind of these blind spots in the data can have? Right. I mean, a lot of it happens online.

At the local level, like yes, zip codes, but also healthcare has come up quite a bit in the networks and circles that I'm around, where folks from a certain area may or may not have access to certain healthcare, because there's not a hospital around or there aren't particular healthcare professionals around

with certain specializations in their area. So that happens quite a bit. It also happens in the criminal justice system. But yeah, I hear a lot about it in healthcare, a lot about it in healthcare. Depending on the area of town you live in, what type of insurance you have access to, whether you're able to redeem the services that are part of your package.
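To make the zip code point a bit more concrete, here is a minimal sketch of the kind of screening check a practitioner might run before training on historical hiring or access data. The field names, the income bands, and the 80% threshold (a common screening heuristic, not a legal test) are all illustrative assumptions, not anything described in the conversation.

```python
from collections import Counter

def representation_report(records, group_key="zip_income_band"):
    """Share of the dataset that each group accounts for.

    `records` is a list of dicts; `group_key` is whatever proxy attribute
    you are worried about (here, an illustrative income band derived from zip code).
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def selection_rates(records, group_key="zip_income_band", label_key="hired"):
    """Selection rate (share of positive outcomes) per group."""
    rates = {}
    for group in {r[group_key] for r in records}:
        group_rows = [r for r in records if r[group_key] == group]
        rates[group] = sum(r[label_key] for r in group_rows) / len(group_rows)
    return rates

def four_fifths_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below 80% of the best group's rate,
    a common screening heuristic, not a definitive fairness test."""
    best = max(rates.values())
    return {group: rate / best < threshold for group, rate in rates.items()}

# Illustrative historical data with a zip-code-derived proxy attribute.
history = [
    {"zip_income_band": "affluent", "hired": 1},
    {"zip_income_band": "affluent", "hired": 1},
    {"zip_income_band": "affluent", "hired": 0},
    {"zip_income_band": "less_affluent", "hired": 0},
    {"zip_income_band": "less_affluent", "hired": 1},
]

print(representation_report(history))               # is any group barely present?
print(four_fifths_flags(selection_rates(history)))  # is any group being screened out?
```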

Right. One of the funny, I shouldn't say funny. One of the observations that I like to share is years ago,

One of my coworkers, she's a single woman, and she likes to go on these road trips. And she said, you know what, since I travel by myself a lot, one way that I make sure that I'm safe is that anytime I need to stop and, let's say, go to the restroom or what have you, I never stop at a McDonald's. She always stops at a Cracker Barrel because they're right off the highway and they're always within a certain distance of each other.

And I just sat there and I was like, wait, that's so. But if you notice, if you think about the Cracker Barrels and the fact that they're all off the highway, they're also in places that run the gamut of different zip codes. Cuz no matter where you are, it could be rural, it could be urban, you're going to see a Cracker Barrel. Wouldn't it be beautiful if we had that type of connectivity?

In the same way, but yes. So I just want to share that story as just a little bit of an appendage on how we can rethink how we use data correctly. Because I think Cracker Barrel put their establishments in places that aren't just urban, but that still connect people of many different demographics. And that was a more equitable way to make sure that their business grew across the country and

without making it like the reverse, which would be a Starbucks that's like only in urban areas.
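Circling back to the data pipeline and chain-of-custody idea from a few minutes earlier: as a rough sketch of what that hand-off responsibility could look like in practice, one option is to attach provenance and bias notes to the data at every stage so the next person inherits the concerns along with the rows. Everything here, the stage names, handlers, and fields, is invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataPackage:
    """A dataset plus the running record of who touched it and what they flagged."""
    rows: list
    provenance: list = field(default_factory=list)

    def hand_off(self, stage: str, handler: str, notes: str, concerns: str = ""):
        """Each participant logs what they did and any bias concerns
        before passing the package to the next stage."""
        self.provenance.append({
            "stage": stage,
            "handler": handler,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "notes": notes,
            "concerns": concerns,
        })
        return self

# Illustrative pipeline: collection -> analysis -> storytelling.
package = DataPackage(rows=[{"zip": "12345", "outcome": 1}])

package.hand_off("collection", "field_team", "Survey responses, Q1 only",
                 concerns="Low response rate in rural zip codes")
package.hand_off("analysis", "analyst_1", "Aggregated outcomes by zip",
                 concerns="Small samples per zip; estimates unstable")
package.hand_off("storytelling", "marketing", "Draft of customer-facing summary")

for entry in package.provenance:
    print(entry["stage"], "->", entry["handler"], "|", entry["concerns"] or "no concerns logged")
```

The specific structure matters less than the idea that the concerns travel with the data, so by the time it reaches the VP of product or the C-suite, the record of who flagged what is already attached.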

From the first time that we met, I wanted our community to meet you because you just radiate positivity, but you're also a pragmatist and you're a computer scientist. And it's easy to have these conversations and focus on the doomsday scenarios, the bot apocalypse and all the dire consequences of mismanagement of data. But another version of this is talking about

potentially using these same technologies for good. Yeah. And I'd love it if you talk us through kind of the other side of the conversation. What could go right? Why is there a reason to be optimistic about what some of these technologies can do specifically in the world of bias and breaking down some of the silos that have held us back culturally and as a society?

I think there is a great opportunity in order to leverage these AI systems and tools and platforms to call out some of the disparities and provide the evidence, right? Because AI doesn't forget anything. It's like the internet, nothing ever gets really deleted. And so what I have particularly enjoyed is seeing how people play back the tape, right?

these are where AI tools could be used. Like, okay, we have this body of work or this repository of information from three years ago, four years ago, five years ago, 10 years ago. And now we can call it back up like the digital Rolodex and say, this is what this person said. And this is what they're saying now. Here's the evidence.

So I think there's actually a way to use AI tools to help with that distinction. I also think, on another positive note, there's a way for us to be augmented by AI, because there are times in which I'll be sitting in a meeting

And I'm sure you have this same issue, Dan, is that people start saying terms. And it's like term soup. You're just sitting there like, what does this really mean? Because your understanding of it is one way. Someone else, because they're from a different discipline, has a different interpretation. And the way that I have used certain aspects of generative AI is to help me fill in gaps quickly,

not for it to be a 100% be-all, but just to give me like, oh, that's what they probably mean by this term. Now I, as a practitioner, can ask a question. Okay, I'm talking to you about this particular topic. Here's the term. I want to make sure that we're clear about how this term is being understood by both parties. So therefore,

As we're having conversations on how to move forward with the data strategy or implementation, we are saying the same thing and we're not crossing signals. So that's where I've really seen a benefit. That does mean that individuals need to have a fundamental background to know what type of questions to ask. But I think AI could be a good benefit in order to augment the conversation and move it

forward in faster ways because now you have a tool that can provide you just that little bit of knowledge so that you can then be more productive and more effective.
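As a small illustration of that "fill in the gap, then ask a better question" workflow, here is a sketch of a prompt one might send to whatever generative AI tool a team already uses. The `ask_model` argument is a stand-in for any chat or completion API, not a real library call.

```python
def clarify_term(ask_model, term: str, my_discipline: str, their_discipline: str) -> str:
    """Ask a generative model how a term is likely understood across two disciplines.

    `ask_model` is a placeholder for whatever text-generation call your stack
    provides: it takes a prompt string and returns a response string.
    """
    prompt = (
        f"In one short paragraph, explain how the term '{term}' is typically "
        f"understood in {their_discipline}, and how that differs from its use "
        f"in {my_discipline}. Note any ambiguity I should confirm with them."
    )
    return ask_model(prompt)

# Example with a dummy model, just to show the shape of the workflow.
def fake_model(prompt: str) -> str:
    return "(model response would appear here)"

print(clarify_term(fake_model, term="data pipeline",
                   my_discipline="computer science",
                   their_discipline="marketing"))
```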

I also think AI could be used in certain capacities for higher ed, and that would really be on the professional side and those in graduate schools. I still have a lot of issues with the K through 12 and the college level, but for professionals, for grown people, grown adults that already have a degree or two under their belt, I think

having an AI type of tutor or support could really be beneficial because sometimes when you are working full time, you are in the midst of a deliverable timeline that you need to get something done.

It'll be great to be able just to ask an AI a quick question. You can then get the answer. You can then move on with your daily task and not have to stop, have a meeting, have all this discussion, and then get back to your job. So I think, as I said, for adults in the workforce who already have a degree or two, AI as a tool or

workmate could be very helpful in certain situations. So yeah, I think that there are possibilities. Again, I think there could be more. I like the creator space and creating content in certain ways. I do have, of course, some of the issues with a lack of credit to some of the creators who happen to be melanated. But

What are ways in which people who might not want to show up on video could create an avatar of themselves and therefore be able to do video? Or what are the ways in which people who are, for instance, neurodivergent and struggling to get ideas together can

have AI help them take all of the word clouds and put them into a coherent sentence for other people to understand, right? Those are benefits that I think could revolutionize how we can be more human while AI is, as I said, a support, not a replacement. But again, I think...

our industry isn't having those types of conversations. And I think that's the hardest thing for me is when I hear from, for instance, neurodivergent people that feel shame for using an AI tool. And I go, no, I would want you to. If it's something that would help you and relieve your anxiety, I would rather have you tell me that you're using it. And therefore, as an instructor or however, I can then go, it's fine.

And then you can continue to use it and then you could feel more confident in the coursework and then feel more confident in the discipline and then be able to move forward and not be shackled by these, quote, traditional learning environments that are not built with you in mind. If we choose to put the right guardrails in place and use it responsibly, to your point, it's such an enabler.

Yeah. And so along those lines, we recently had a great conversation with Keith Sonderling, who is the commissioner of the Equal Employment Opportunity Commission. And he's actually now the incoming deputy labor secretary. And just a wonderful conversation. And he's actually, you know, also an AI optimist like you and I. He said employers want to do the right thing. And one of the

positive consequences of using AI to make hiring decisions is that it illuminates some of the biased decisions that were being made by humans before. All of a sudden, the data shines a light on how we could improve. Because most employers want to hire a diverse workplace and want a diverse representation of skills and backgrounds. Yeah.

But through cultural reasons that, you know, accumulate over decades, they have a hard time. So it's one example of where humans wanting to do the right thing can be enabled or empowered through responsible use of AI.

Is that reflected in your work? Do you feel like there's a general intention to do good with AI? Or what kinds of questions do your clients come to you asking about with respect to how to use it in the right ways? Right. I mean, a lot of their questions are around what data do we have that actually is useful? Because I know there's a lot of conversation around AI.

But what I find with my clients is they don't necessarily have a data strategy in place first. So they have data workflows that they haven't documented that they then realize, oh, we have gaps here and here, and we need to fill those gaps before we can adopt certain AI tools. Or this is the reason why the AI tools that we've adopted aren't being optimized is because we have these gaps.

So a lot of the conversation is around how do we identify the gaps that we have and then fill them? So what data don't we need to collect anymore? Which data should we be collecting? What principles should we be outlining for our internal teams?

Because the marketing team is going to have a different set of data inputs and exchanges than, let's say, the finance or the legal team or the DevOps team. So they're all trying to get this landscape of where they touch data. It's almost as if they have a pie and everyone is looking at their own slice.

And so their questions are about, okay, now we understand there's a whole pie. There's not just these random slices. So how do we make sure the pie comes together and gives us the full picture? And it can be daunting. So some of them are overwhelmed. But then breaking it down into those snackable tasks for each team becomes where I do my work.

I give that direction so that they can make informed choices about which AI tools they really need to use. And then how to get their team to adopt or not adopt under certain circumstances. Because that's really the friction. Some people in the organization are like, let's adopt AI everywhere. Other people in the organization are like, no, never.

So how do you bring both sides together, honor their position, but also make sure that you're able to move forward the business objectives and make sure to align the KPIs in order to get the ROI that they need. So it's a complicated issue, but that's what I'm here for. I'm here to deal with the complicated because...

I like it. Someone has to. Someone has to. I like unraveling. It's like, oh, I'm unraveling a puzzle and I'm putting the puzzle back together. Right? So...

I like that. But yeah, those are really the questions that come around in my space: they really do want to do good, but they just have no idea where to start. And understandably so. There's not a core curriculum in K through 12 or higher ed that says, here's the data skills that you need, or here's the AI fluency skills that are required before you graduate

you know, high school or college. You just kind of have to pick things up on your way, which is a whole separate conversation. I think we need to institute a whole digital skills requirement, but we can save that for another conversation. That's the next episode we record. Yeah. So I want to go back to your example of healthcare. Not if, but when Dr. Marshall rules the world.

I give you a scepter and you have ultimate power. For citizens who have been, let's say, harmed by an algorithm, you know, their claim was denied, for example, what rights should we have to understand when and how an algorithm or AI was used to make a decision that impacts something, you know,

like health care? Because right now there's no obligation for disclosure. You know, it's very opaque. Yes. What should the process be?

Of course, I would say the process needs to be very transparent, because it's very opaque right now. And it's by design that the healthcare industry does have its own silos. And so one person in a healthcare organization might not know what another person in the organization is doing, especially on insurance. So if I ruled the world, I would say the process would need to be one of

noting when an algorithm was used to make a decision, a diagnosis in particular. So one of the biggest struggles is with patient records.

You as an individual getting a patient record, getting a patient record for a family member, especially if you're the parent or guardian, elder care, child care, and everything in between, right? Partners and things like that as well. So having a platform where you can actually access your records regardless of

which insurance company you're affiliated with. That record is with you as an individual, like your social security number. So that no matter where you go, you still have records of everything from your medical care to your vision care to your dental care.

And that is a digital package that goes with you throughout your life. Very much like a social security number. That would be, if I ruled the world, what I would want to see. And inside of that record would be the doctors you spoke to, what were their notes, um,

Any type of x-rays or treatment that was done, that would be inside that documentation as well. Just who you engaged with, when your previous appointments were, when your next appointments are, it would all be there. Right now, it's done, but it's done based upon the...

The hospital you might be affiliated with, so they have a certain piece of your record. It might be with the insurance companies. They have a piece of that record, but there's no centralized place. And so if I ruled the world, that's where I would focus. And then, yeah, if there was a dictation tool used, it would be noted. Dictation tool used in order to do these notes from doctor so-and-so on date so-and-so.

Right. So then if something happens to you and you're unable to advocate for yourself and your designated advocate could be a partner, could be a family member, could be a friend, it's already attached to you. Sounds so obvious when you say it like that. Yeah.
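As one illustration of what "noting when an algorithm was used" might look like inside a portable, patient-owned record, here is a minimal sketch. The classes, field names, and example entries (the dictation tool, the claims model) are hypothetical, not a description of any real system.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RecordEntry:
    date: str
    provider: str
    entry_type: str                        # e.g. "visit_note", "x-ray", "claim_decision"
    summary: str
    algorithm_used: Optional[str] = None   # name/version of any tool involved
    algorithm_role: Optional[str] = None   # e.g. "dictation", "triage", "claim scoring"

@dataclass
class PortableHealthRecord:
    """A patient-owned record that travels across insurers and providers."""
    patient_id: str
    designated_advocates: list = field(default_factory=list)
    entries: list = field(default_factory=list)

    def add_entry(self, entry: RecordEntry):
        self.entries.append(entry)

    def algorithm_disclosures(self):
        """Everything in the record that an algorithm helped produce or decide."""
        return [e for e in self.entries if e.algorithm_used]

record = PortableHealthRecord(patient_id="demo-001", designated_advocates=["partner"])
record.add_entry(RecordEntry("2025-01-15", "Dr. So-and-so", "visit_note",
                             "Annual physical; notes transcribed",
                             algorithm_used="dictation-tool-v2",
                             algorithm_role="dictation"))
record.add_entry(RecordEntry("2025-02-03", "Insurer", "claim_decision",
                             "Claim denied",
                             algorithm_used="claims-model-7",
                             algorithm_role="claim scoring"))

for e in record.algorithm_disclosures():
    print(e.date, e.entry_type, "->", e.algorithm_used, f"({e.algorithm_role})")
```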

Right, cuz now we have like there's alert bracelets and there's, you buy this particular service in order to notify if you fall. And I mean, there's all these other technologies that are used, but they're not plugged into a centralized system. So that means that you have to have access to that login of that service and have the authorization to get it in order to, I mean, it's just so much. But then that does mean,

That you need to have strong data protections and data security so that someone doesn't get access to essentially your health file and then try to impersonate you or imitate you, right? So that does mean that there needs to be stronger guardrails around data protections. But I mean, I don't know about you, but data breaches happen so often that

that at this point, what's the harm in having it all centralized? We're way over time, but I told you it's been so worth it. And I'm not letting you off the hot seat without answering one last important question for me. Yes. This one really, yeah. I want to make sure that we, uh,

We have a chance to talk about this one. So I'm guessing there was a little girl version of Dr. Marshall. And probably at some point, you know, you received some messages, you know, from society that, you know, you cannot...

be successful in a STEM career. You cannot be a college professor. You cannot get a PhD. And yet you overcame. So who were your role models who taught you that you could achieve what you've achieved? And what's your lesson or what's your advice to all of the other little Dr. Marshalls out there that need to hear, you know, that they can succeed too? Yeah. Thank you for the question. So

Oh, what would I say? So, well, first, I did go to private school. So let's put that qualifier there. I went to private school. So my parents, I think, were instrumental in ensuring that I did have education as a foundation. And so it wasn't just private school, as if private school is like a silver bullet. But they were very active in making sure that I, yes, did my homework.

They quizzed me on things. They taught me at home certain things, right? I learned how to count coins from my mom. I learned how to organize files from my dad, right? So I was doing the chores. So I think having active parenting and having a really strong educational foundation is important.

I am a product of an all-girls high school, so any type of mean girl antics I got, and I got them very early. So by the time I got to college, nothing fazed me.

So I definitely will tell other, I guess, little Dr. Marshalls in the making that, yeah, it's okay. You're going to rise above. Yeah, I think the most important role models were really my parents. And then after that, it's always just been certain teachers that have advocated for me, and

certain individuals that have helped support seeing my own vision and seeing my potential. So it's really about the circle of people that you are around. Do they see you or are they trying to use you?

And I had enough people along the path that saw me and supported me without wanting something in return. And so I hope to pay that forward. And I try to do that moving forward. And that's why I have the Black Women in Data Summit, trying to encourage mid-career women, especially Black women, to invest in themselves and bet on themselves. And they don't need to always invest in other people. They can invest in themselves.

and it's a good thing. So yeah, you can do all the things. You can overcome the friction and the strife and the meanness of people by not letting them stop your show, not letting them dim your light, not letting them dictate your path. Follow your gut and keep at it and keep going forward. Brilliant answer. Thank you for sharing that. You're welcome.

Well, gosh, this one's sped by and we are going to have you back because we're just getting started. Yeah. Thanks for hanging out. This is so much fun. So much fun. I would love to come back. Can't wait for the next, next round. You got an open invite. And of course, thanks again to mutual friend, Kai Nunez for making the intro. Before we wrap up, where can the audience learn more about you and the great work that you're doing?

So you can definitely check me out on LinkedIn. That's where I hang out in the digital world. You can also check me out on, of course, my company website, dataedx.com. You can look at brandeismarshall.com. That'll take you to DataedX as well as Black Women in Data. But yeah, I typically hang out on LinkedIn because...

It gives some good information. We have some spirited conversations and try to have some good positive times in the digital space. So that's where you can find me. Excellent. Well, great work. And gosh, that's all the time we have for this week on AI and the Future of Work. As always, I'm your host, Dan Turchin from PeopleReign. And of course, we're back next week with another fascinating guest.