Emily Reid, CEO of AI4ALL: “AI Will Change the World. Who Will Change AI?”


Transcript 

Hessie Jones 

Hi everyone, welcome to Tech Uncensored. My name is Hessie Jones. I’m pleased today to welcome Emily Reid, who is the CEO of AI4ALL, an organization I’ve been following for a while. Their mandate is to promote diversity and inclusion within the artificial intelligence sector. So I’m going to throw a couple of stats at you, because we’ve seen AI emerge really quickly within the last eight years, but what we see consistently is that the male voice still dominates within the sector. In 2018, the World Economic Forum reported that only 22% of AI professionals globally were female. LinkedIn indicated much the same: only 16% of AI professionals on their platform were women. As well, only 12% of AI researchers worldwide are women, and if we look at the tenure-track engineering faculty, only 2.6% identify as African American or Black and only 3.6% identify as Hispanic. One of the co-founders of AI4ALL, Fei-Fei Li, is a distinguished computer scientist. She’s well known; she’s a professor at Stanford. She led the development of ImageNet, a large-scale database of labeled images, which has been profound and crucial in advancing a lot of the work done in deep learning as well as computer vision. Fei-Fei Li and her AI4ALL co-founders, Olga Russakovsky and Rick Sommer, all recognized that there was a substantial gender and racial gap in STEM as well as the AI field, and that they needed to create opportunities for underrepresented groups to engage with and contribute to the field of artificial intelligence. If you go on the website of AI4ALL, you’ll see these words: AI will change the world. Who will change AI? So I’m excited today to speak to Emily Reid, CEO of AI4ALL. She and I are going to address some of these current concerns about AI as it rapidly materializes into our work and our personal lives, and how this organization is actually seeking to influence a more inclusive future. So thank you, Emily, for coming to speak with me today. 

Emily Reid 

Absolutely. Thank you so much for having me, Hessie. Appreciate it. 

Hessie Jones 

So let’s start off with you. Tell me a little bit about yourself: your interest in this topic, what you have done, and how you ended up at AI4ALL. 

Emily Reid 

Yeah, absolutely. Thank you. In terms of my own story, one of the fastest ways to describe myself is that I’m a computer scientist raised by educators. Both of my parents have been educators their whole lives. They grew up in really working-class backgrounds and used education as a way to bring themselves into the middle class. My father grew up on a farm and became a professor; my mom grew up in the projects and became a lifelong elementary school teacher. I’ve always looked at education as a way to solve problems, though initially in my career I thought I wouldn’t go into education at all. It ended up being something I really couldn’t stay away from. I studied math and computer science in college, and that is around when I really started to develop an interest in, and concern about, the lack of diversity I was seeing in the spaces I was in. I was usually one of the only women, maybe one of two or three women, in a lot of my computer science classes, and then eventually, when I went into the tech workforce working as an engineer, I saw the same thing. I was also becoming increasingly concerned about what I saw as a lack of ethical frameworks around a lot of the work we were doing. It felt a bit like the Wild West. There are a lot of other disciplines, like law or medicine, that have them; you can criticize the structure, but they at least have a structure and framework around ethical standards, and that’s not something that is standardized yet in the AI space. So for me, I was becoming really concerned by what I saw. I was also experiencing a lot of personal frustration and challenges as one of the only women in the space, and I knew that I was walking into those rooms with a ton of privilege, so it spoke to the fact that this was a really deep problem. That was when I became increasingly interested in how we could use computer science education to address some of these problems: what’s the root of this issue? I always say to folks, I think computer scientists are problem solvers at their core, and this was the problem I became most interested in. So I ended up going back to graduate school for AI, and I did research and work in computational linguistics, natural language processing, and machine learning. But during that time I was also going deeper and deeper into computer science education. I ended up joining what was then a small organization called Girls Who Code. I was with Girls Who Code through our hyper-scaling period and really learned how to scale high-impact education programs. I came to AI4ALL in 2018 to launch our Open Learning program, which was a program to take a lot of what we had found successful in our early high school programs, put it online, and bring it into schools directly. During that time, we were also launching our college programs at AI4ALL, and that was where there was really increasing interest in getting students directly into the workforce. AI was really picking up. I became CEO about two years ago, and what’s been interesting during these last two years has been seeing this real evolution and resurgence of AI and generative AI. And of course, ChatGPT, then Gemini and DALL-E, all of these tools, came out. 
They really had such a huge cultural impact. And I think what’s been interesting about being at AI4ALL through that period is that it’s a moment our founders really anticipated when they started that original program you described. They knew that this moment would come and that society would need to be much more ready than we were. So our mission, to create the next generation of AI changemakers that the world needs, is really focused on the fact that this is a train that has left the station. I understand when folks are really concerned about where AI is going; I think those concerns are valid. I also understand when folks get extremely excited about where it’s going and what the opportunities are, and I think that’s valid as well. To me, we have a choice about what the future of AI looks like. It’s not something that is already set in stone, but it is going to be written by this next generation of AI technologists. And if we don’t make some changes now around what the diversity of that cohort of technologists looks like, and if we don’t make some changes around industry standards for trustworthiness, human-centered AI, responsibility, and ethics, then I think we’re on a really troubling trajectory. So now is the time. 

Hessie Jones 

I think even when AI was starting to emerge, and you and I had this conversation earlier, there was an understanding that there was a diversity gap. And so industry, not only the tech industry but even outside of tech, and even on the investment side, started making strides to ensure that there was a lot more representation, a lot more diversity of voice, when it came to technology or when it came to actually investing in founders. What have you witnessed recently that makes it seem like we’ve taken a couple of steps back since that recognition? 

Emily Reid 

Yeah, I love this question, Hessie, because I think it really cuts to the heart of a concerning, but also potentially opportune, time that we’re in. 2022 in particular was an interesting year, because when I started out in this role, AI had actually briefly left the headlines for a little while and everything was about crypto. We were asking, what’s our position on crypto? A lot of folks, students and funders, were asking about it, and how do we make sure we stay the course? We knew that AI was still going to be pivotal, but it was sort of not the hot topic right then, which I think people forget in the wake of the generative AI explosion. But there really was that period for a good part of that year. Then, toward the end of 2022, we went through the tech layoffs, which had an enormous ripple effect in our industry of computer science education and workforce development, because many of these organizations, AI4ALL included, work with corporate partnerships as part of our funding models. It’s something we’ve really valued, because part of what we want to be able to do is launch these students into their first roles in AI, so we want to understand what’s going on in the workforce: what are folks hiring for? That is a space that is changing much more rapidly than the universities the students are in, and so we see ourselves as a bridge between that university experience and a workforce that is so rapidly changing. What we were definitely seeing during that time was that folks I knew in the tech industry who worked specifically in the DEI space or the corporate social responsibility space were getting more constrained. At the same time, about a month or two later, there were these enormous booms in AI investment, and those were the teams that were growing. And I would say that folks on, say, responsible AI teams were a little bit in the middle: those teams weren’t necessarily growing the way an AI product team was, but they were also AI-specific, so they might have been retained or prioritized during those tech layoffs. Everything that’s going on in the tech industry ends up having a real impact on the nonprofit organizations that are partnered with tech companies. So for us it was an interesting period, because there were constraints with some of the partners we were speaking to, and there were others who were excited and coming to us because they wanted to get more involved with AI organizations. Unfortunately, over the past couple of months I’ve seen some really wonderful organizations, like Women Who Code and Girls in Tech, need to shut down globally. I don’t know the details around why that is, but I have to imagine there’s some element of this. This is a really critical issue that I don’t want the tech industry to forget about. There was a lot of progress made for a number of years, and the layoff period, I think, has rolled back some of that progress. So my feeling is that we really do have a choice around what the future of AI looks like. We have a choice around what the future of technology looks like. 
I used to talk about those things separately, but the way that tech and AI have been developing, AI is going to be part of everything. I think Ginni Rometty from IBM said that AI is going to change 100% of jobs, 100% of industries, and 100% of professions, and I don’t think that’s hyperbole. There might have been a time when I felt it was hyperbole, and I no longer feel that way, in part because I’ve heard some folks likening this to the Internet, and I think it’s a reasonable comparison. There were people in the early ’90s who said, oh, the Internet is going to be a fad, right? And we all know that didn’t end up being the case. In fact, Internet access is really critical for students to even be able to learn about some of these tools, never mind learn how to program them. That really rapidly changing nature, to me, means that this moment is very critical. It means that if we don’t make changes right now, then the status quo of a really homogeneous industry that does not have a standard ethical, responsible framework is going to be the one we move forward with as a society, and it’s going to be increasingly difficult to change as the years go on. At the same time, I think if we make a lot of changes now, that will become the model of what the future of AI looks like, because things are changing so rapidly. So I have both a deep, deep concern about where we could go on one path, and a real excitement and sense of hope about where we could go on another. 

Hessie Jones 

You know, I have the same feeling; some days it’s better than others. On the areas where some of these amazing DEI initiatives have failed, I’ll add one that I know of: a fund that was specifically started to invest in women of color and women founders in tech was recently closed down, and it had raised over $25 million. Its closure sent waves through the industry, because it had been so successful in its raise, but due to legal issues, which they could not afford to fight, they unfortunately were unable to continue. But I will say, from an investment perspective, what I’m seeing is that investors, especially as they start to look at what generative AI can do, are increasingly anxious about the technology and not really sure what to trust and what not to trust. So they’re looking to people who know the tech to get deeper into it, to even provide some elevated form of technical due diligence, in order for investors to be able to say, this is good enough for us. So I think in a lot of ways, in advance of legislation, it’ll happen just because investors need to make their money. And if money talks, then this is the way to do it, right? 

Emily Reid 

It’s so true, because I think that is unfortunately a perfect example of how it affects the investment and industry side. And it’s interesting what you mentioned about regulation, because a lot of my conversations with partners and advisors in our space in 2023, no question, were around generative AI: we don’t want to get left behind, we don’t want to miss this boat, how do we use it? I had friends who work in relatively non-technical industries, or industries that might be technical in a different way but aren’t necessarily using AI day-to-day. I had a friend who works in more of a sales position at a biomed company call me to ask, well, what should our AI strategy be? All of a sudden there was so much focus, and an anxiety, as you mentioned, around we don’t want to get left behind, and we want to make sure we’re taking advantage of this. Then most of those same conversations this past year, or the last six to nine months or so, have been around governance: what does AI governance mean, and how do we make sure to manage these risks in the absence of any kind of real legal framework? That’s something our founder, Dr. Fei-Fei Li, has actually talked a lot about, and we’re looking at how that might be something some of our students are interested in, because there is a real lack of overlap in expertise between folks in the policy and legal space and folks in the AI space, and there really is a need. I think this has always been the issue with technology legislation: the technology moves so much more quickly than the legislation and policy do, and you have so few folks who are really sitting at the intersection of those two areas of expertise. So I do hope that we will get to a great framework, but I think it’s going to take more folks who have that computer science background working in that space, or maybe advising it. 

 

Hessie Jones 

OK, I want to touch on AI advancement and why diversity is so crucial; we’re at a time right now where we’re starting to see some of the impact of why DEI is actually needed. So I want you to talk to me a little bit about the anthropomorphization of AI, and it’s funny, I actually got through saying that word, because it has become a word that is so difficult to say, but it is a mainstream term. When we talk about this, we’re talking about the attribution of human characteristics, whether it’s emotion or whether it’s gender-related. We’re starting to see it a lot more in some of these AI chatbots, and I know it became a little bit of an issue when Siri came onto the market, as well as Alexa and Google Assistant. But I want you to speak to me a little bit about what the implications are from a gender perspective as these AI chatbots start to roll out in massive ways. 

Emily Reid 

Yeah, absolutely. I think it’s a huge issue, and agreed that, while it’s a little bit of a mouthful, it’s good that the anthropomorphization of AI has become a little bit more of a household concept. There’s a lot of complexity to the impacts of an AI that seems like it could pass the Turing test, where it would be difficult to tell whether this is really an AI or a human. There’s definitely a lot of evidence that users, human beings, prefer interacting with an AI that does seem very human. I think there’s a lot more awareness now of the concept of the uncanny valley: the idea that folks have a really positive reaction to an AI or a robot until it gets almost too human, and if it’s just short of being too human, it’s really uncanny, and that can be something people then reject. So there are some interesting ideas around whether you make it clearly recognizable as an AI rather than a human being, or whether you really try to make it as human as possible. Part of the challenge with that, to me, is that it does become confusing. I have a three-year-old daughter; she talks to Siri. I’ve turned my Siri into a male voice just to mix it up, but she will chat with Siri and ask him her questions. She’s aware that it’s not a real person, I think, but kids are growing up around these technologies as though they may be human beings, or as though it might be difficult to differentiate which is which. Especially on the gender side, I think this is a really big issue, because we continue to see that the vast, vast majority of voice assistants tend to be female-coded in some way, whether it be the name, the voice, or both. That, to me, is a really, really big concern, in part because it continues to put women in the position of the stereotype of being a helper, being an assistant. Part of the reason that happens, beyond our gender norms as societies, is that some developers say folks respond better to a female voice, and that’s part of the testing, right? That might be a legitimate reason, but what are the other impacts of that, and is that the only thing we should be valuing? To me, this is the sort of issue we need to be grappling with in an ethical framework for AI: what are the ethics of that choice? In addition to reinforcing gender stereotypes, I really believe this is something where, it’s certainly not on the shoulders of the one female engineer in the room to raise this as an issue, but if we are moving towards a more inclusive, more diverse industry, I think we’re going to have much more nuanced conversations around what this could look like: what our default assistant voice should be, what options are available to change it, and how to properly evaluate what the risks and harms are. We don’t really have developed, standard frameworks for that at this point, and again, I think that if we don’t change that soon, we’re going to be in a world where voice assistant and female voice seem synonymous. 

Hessie Jones 

Yeah, I wanted to add to that, because there are a lot more emotionally supportive chatbots. I actually tried one. It’s called Pi, and it’s supposed to act as a coach and a confidant. Sometimes I ask it questions like: how do I deal with this person, who is highly sensitive or who is an intern, when I really want to convey a message but I want to do it in a way that’s non-confrontational? And it is eerily effective in its advice. So it brings up a couple of questions I want you to address. One is about dependence, and the second, which you kind of covered in the last statement you made, is the legalities when it comes to the advice that’s given, and who’s going to be liable if somebody takes that advice to heart. 

Emily Reid 

Yeah, it’s a really, really rich example, because I think there are a number of issues it brings up. Going backwards a bit to one of the last things you mentioned, around what’s going to be shared and who’s liable: I actually started my career in cybersecurity and got into machine learning through that process, so I’m always looking at all of these models with a frame around: what should the privacy restrictions have been on the data that was used to train this model? And as we continue to feed more data into models, how much personal information or company information might we be giving up? How is that being used? I think that’s a huge issue that companies and individuals are grappling with. At the same time, as another example of this type of bot, there was a story recently about one of these emotional support chatbots being used with lonelier adults, and we know that loneliness in older generations is a really, really huge mental health problem. So to me, there are some real difficult questions there. One is, I think the sort of negative, visceral reaction that folks have to something like that comes from the place of, we don’t want this to be a Band-Aid. But at the same time, the current state of affairs is also not working, and so is it going to be the case that having more of those emotional support chatbots available to folks is actually going to help them, at least in the short term? It also runs the risk of being a Band-Aid to a problem and not actually treating the deeper disease, right, just treating the symptom, not the disease. So I think it brings up some really challenging questions, because it’s concerning to folks to put some of this emotional labor, if you will, into AIs, but at the same time, compared to the world as it is today without it, it might actually help some people. And I think that’s one of the biggest challenges: how do we navigate that? What you mentioned about the Pi chatbot is a really great example, because it is giving advice, and you could imagine a world where this veers toward something that, say, therapists are using, right? There’s one level you might be using in conversation with your team, and then there’s another, deeper level of what a therapist might be using. On the one hand, it does bring up a lot of issues around whether we are now outsourcing that kind of real human emotional work to an AI; all of these bring up real questions and anxieties about what it means to be human. So that’s one part of it. But again, at the same time, knowing how influential these technologies are, would it be better to have an AI chatbot that is not emotionally intelligent at all, and what would that look like, right? To me, these are the really thorny, challenging issues to which there are no real easy answers, in my belief, and this is really why we do this work at AI4ALL. 
I always say to folks, I have my own ideas around what I think we should do as an industry and what I think needs to change. But ultimately, I really believe that what is going to make the biggest difference, really help us head in the right direction, and answer these really thorny issues is when we have a large, diverse, ethically trained generation of AI technologists who are bringing their own areas of expertise, their own life experiences, and those perspectives into the conversation, who feel empowered to do so, and who are in positions where people will actually listen to what they say. We have what we call the Future Forum dinner series, these salon-style dinners with advisors, partners, and some of our staff as well as our students, where we bring up a big issue around what the future of AI is going to look like: as generative AI and Gen Z really intersect, what’s that going to look like? And I’ve been in conversations where folks were talking about Gen Z and STEM education, and I raised my hand and said, this is a great conversation, but there’s no one from Gen Z in the room. What do they think? So we make sure to bring our students into the conversation and keep their perspectives at the same level as some of our C-level partners at top AI companies, because to me their opinions are just as valid. I’m a really huge believer in that collective intelligence, if you will, of our real human network coming together on these issues, because that question of whether emotional intelligence being built into a chatbot is positive or negative is a really thorny question without a clear answer right now, but I think that’s the way we’re going to be able to get to better answers. 

Hessie Jones 

Yeah, I agree with you. And I think the thing that’s really hard about AI, and people who have developed software know this, is that you don’t ship something that doesn’t 100% work. AI is a different beast, because it trains and it gets better, but it has to train on real data in order to get better. So you can’t keep it locked in a box and assume you can keep feeding it other data to get better. That’s the unfortunate part of it being a different kind of technology than normal software, right? So, since you’re talking about students, I want to switch a little bit to education, because there was a recent article about a company called AllHere, an educational platform that was hired by the Los Angeles Unified School District to build a $6 million AI chatbot called Ed, to help both students and parents navigate a lot of the educational resources and supplement some of the classroom instruction. From an optics perspective, it looks like a really, really good thing. Not only would it help students, it would help get teachers to a better level of instruction and reduce their being overwhelmed, which they have been for years, and also act as a catalyst to enable a new type of instruction to get kids moving in the direction we want them to go. But apparently it failed; the company shut down and furloughed its employees. I want you to speak to this idea of education and AI in the classroom, and situations like this that have been trying to bring not only the technology but also a new wave of how we do things into an education system which, we know, has so far been, I guess, archaic. 

Emily Reid 

Yeah, I think that’s a fair way to put it, because it’s so difficult for some of these really large institutions to change, and there are certainly some reasons why we may want to be thoughtful and methodical in that process. But there are also times when it can just create a really challenging environment for innovation. The way I think about change and innovation is around opportunity. It’s in those moments of change, when it feels like the ground is shifting underneath you, that I think we have the most opportunity to change systems for the better, because it’s really hard to change systems when they are firm, immovable objects. When I heard about this story, a couple of different things came up for me. One, in terms of what sounded like a relatively ambitious project for this new organization: from what I knew of the story, it read to me a bit as something that was maybe a victim of the AI hype cycle. We know that technology goes through these hype cycles. There are periods when we get really hot on a particular technology, and AI has gone through this a number of times, and then we go through an AI winter, where some of those ideas don’t necessarily bear out and there ends up being a lack of investment in both the private and public sectors. Both the business investment and the scientific R&D investment in these technologies go through those cycles. To me, it read a little bit as something where there probably was a lot of hype and excitement that then really overshot what was possible to do in that period of time. That, I think, is going to create a lot of challenges for a lot of these AI companies; of course we’re going to go through these boom and bust cycles. But the thing that worries me the most is who gets lost in that process. I do believe that, over time, technological innovation tends to create more jobs than it gets rid of, but it does get rid of jobs, and there is a lot of reshuffling in the short term, and it’s the folks who are more marginalized in society who are going to experience the worst of that. They may not necessarily be trained up, or have the social safety net they need, to manage those periods when jobs are shifting around. So those are a couple of the things that come up for me on the industry side when I read that story. But thinking about the education side: I am a huge believer in the need for AI literacy, and by that I don’t just mean students being competent in using AI tools. That’s something this generation will pick up, as long as they have access to the Internet, which can be its own question. If they have access to the Internet, they’re going to have access to these tools, and they’re going to become more expert in them than I am. So that part, I believe they’ll become competent users really quickly, as long as they have Internet access. 
That’s not necessarily the same as being able to understand what is really underneath the machines and what is really inside these algorithms, and being able to be real informed citizens, or informed users, of these technologies. That doesn’t necessarily mean you need to be teaching machine learning to kindergartners, but I think there is actually a way for us to teach the fundamentals of artificial intelligence that is not dumbing it down, that is really honest about what the technologies are and what the risks and benefits are, and that allows for an educational system that helps students, at a minimum, become real, informed users of these technologies and understand a little bit more of what’s happening behind the scenes. In the pre-AI world, or pre-AI focus, thinking about computer science more broadly, I had compared this to the Mavis Beacon typing classes and Math Blaster that I did in my computer class in elementary school, as opposed to learning how to program, right? Those are two really different things. We can be users of technology, great, but how do we actually become the creators of it, or be informed enough to influence it? To me, that all starts with AI literacy. You don’t necessarily need everyone to become a machine learning engineer, but even getting a more diverse group of machine learning engineers is going to have to start with broader AI literacy, because you need that point A. I have seen this so much working across the high school, college, and early career space: if you don’t have really broad AI literacy, really broad computer science education, then what happens is that the students who end up studying it in college are maybe only the top math and science students, or disproportionately students whose parents were already in the industry and who have been around it at home. In order to really make that a much more diverse space, we have to be able to start from more of a level playing field. So this story brings up a lot of those challenges for me: both the hype cycle and what that’s going to mean on the industry side, but also what the role of AI tools in education is going to look like. 

Hessie Jones 

It’s funny you say that. I was going to ask you a question about the importance of STEM, but I’m starting to realize that, the way you’re talking, it’s not really about forcing people into STEM but about creating a foundation at the very beginning, where they understand the technology and its intricacies. You may not necessarily be good in math, you may not necessarily be good in science, but you at least know enough about the fundamentals of the technologies you’re using to apply more of a critical-thinking perspective. And maybe this is something you could also discuss: there is still an importance to going into the social sciences, the humanities, and the arts. So how does that all integrate? Let’s say I decide: I like AI, I like to use it as a user, I understand it’s important in how I navigate all my daily stuff, but I want to be an artist, or I want to help people in social science. From that perspective, you’re saying it’s not going to limit you if you apply some of the stuff you were talking about? 

Emily Reid 

Yeah, absolutely. I think it’s a really important example, because in the computer science space, a lot of the workforce development and CS education work has been focused on, there’s one piece around AI literacy, but then we’re talking more about how we develop programmers, computer scientists, machine learning engineers, software engineers. To me, AI is really upending that entire system, and I’ll give a couple of examples of why I think that’s true. Folks have talked about STEM education for a long time, and frankly, I’ve always had trouble with that particular acronym, because I think it ends up obscuring what we’re talking about. It was often used interchangeably with computer science, but actually none of those things, science, technology, engineering, or math, is exactly the same thing as computer science, so it can run that risk of obscuring things. And now, even looking at the computer science space versus the AI space, there are things that are really different and unique about AI. That’s one of the things that brought me into this particular organization when I first joined, because there was a sense that, while AI could certainly be viewed as a sub-discipline of computer science, there are some real unique qualities around AI that are different, and that really end up being highlighted when we’re talking about the practical impacts on education and the workforce. One example: during those tech layoffs we talked about, in addition to having friends who worked in the DEI space at tech companies experiencing layoffs, I also had friends who were software engineers getting laid off, not just because the company was potentially shrinking, but also because, in the later phases of that, as more generative AI tools were coming out, AI was actually replacing software engineers in some cases. Our conversation around job replacement and AI years ago was more focused on concern about blue-collar jobs, and what we’re seeing now with generative AI is displacement in white-collar work: for lawyers, for content creators, for graphic artists, and for computer scientists and software engineers. The asterisk I’ll put there is: if they don’t have an AI skill set they can continue to apply in their work. So when we think about AI education, while the computer science world is changing and shifting, the concept for a lot of the early computer science education organizations was: OK, having a degree in computer science will give you a ticket to a great career path. I think that is still true when you put the asterisk on it that it includes AI skills. But at the same time, computational thinking has actually become more important for everyone in every industry, because what AI, and generative AI in particular, has done is turn all of us into programmers. We’re just programming in natural language. So when you go to ChatGPT or you go to Gemini and you’re creating something, I’ll have friends who describe, OK, there’s some social media content they’re creating, or some sort of image they want to create, and they have 
to go through a number of iterations; they have to give it feedback, and they have to test it and run it again. And I say, oh yeah, you’re debugging your code; your code is just in English, or your natural language of choice, right? In computer science we talk about layers of abstraction. We were once programming practically in binary, then we built compilers, then we had things like Python, and we went further and further up. Now we can program in our natural languages, so that really is a shift in all roles. I sometimes hesitate around this because I don’t want to slip into the hype side, but that quote from Ginni Rometty, that this is going to change 100% of jobs, I think there’s really something to that. One of the comparisons folks have given, and I think it’s a pretty valid one, is whether this explosion of AI, generative AI in particular, is going to be similar to the onset of the Internet and broad access to the Internet. There were folks in the early ’90s who were saying, oh, this Internet thing is a fad, and we’re going to come back to how we used to do business, and obviously that wasn’t true. I think it’s possible that AI is, if not at that level, certainly within that order of magnitude. So to me, that creates an environment where, yes, artists, lawyers, social scientists… I know a lot of folks who work in computational social science, and the lab I actually worked in in grad school was really focused in that space, where you have social scientists and machine learning engineers coming together and working on some really interesting problems. So I think it’s going to become more of just a way that we all work, and being able to bring AI expertise into other spaces is a really different story than in years past, when this was something that only really lived in the most advanced labs and some of the top research institutions. It’s going to be something that we are all touching in some way, regardless of what role we may be in. 

Hessie Jones 

It’s interesting that you say that, because when we talk about AI and potential displacement now, you didn’t say that Ginni Rometty said people will be displaced; you said that AI will change everything. And I listen to a lot of artists and a lot of writers who are afraid that AI is going to get good enough that it won’t need them. But I say, you know, writers and people who have a specific craft have an advantage over others who don’t, in that they may be able to use their natural language programming to up-level their skill. So they may not necessarily write from scratch, but they will become super editors, and they will go faster than those who haven’t written a stitch of an article in their whole life, right? It’s the same with artists. Maybe it’ll elevate everyone to a new level they’ve never hit before, because it brings on a new type of efficiency and obviously a new type of adaptability. 

Emily Reid 

Exactly. One of my real core philosophies around technology is that we should be looking at it as a tool, as a means to an end, and not the end in and of itself. The philosophy that tends to prevail in a lot of Silicon Valley companies and the tech world is that advancing the technology is the end goal in itself, and that’s where, in my personal opinion, the obsession with AGI is incredibly misplaced. To me, this should really all be about the goals we have as a society, as communities, as individuals: these are tools, really wonderful tools, that we can use, if only we have access to them, and if only they are being developed by a relatively representative group of people. I agree, I think there’s a lot of potential for innovation in those other spaces. And I will say, I have friends who are graphic artists and are really concerned, because some of the work they would previously have been paid to do, I can now do in DALL-E, or maybe not as good a version, but I can do a quick version in DALL-E in a couple of seconds, right? And that does cause a real issue for them in the short term. But because we sometimes get into this mindset of thinking about the technology as the end goal, we miss some opportunities around, well, maybe this is really just something that’s more assistive; there might be some areas where the technology is the real end goal, but not everywhere. I’ve heard a lot of folks talk about how it gets rid of the blank page problem as a writer: if you need to write a report of some sort, within all of the ethical frameworks of whether you’re doing that in a company or a school, I think it really does remove a little bit of that rough draft, first-pass, blank-page issue, because it gets us started with something. As you said, you can then be editing; you can then be in a different kind of role. I was really interested when SAG went on strike, when actors and writers went on strike, and I think there are some really interesting stories and lessons there in how the human beings at risk right now can use their own collective bargaining power to force industry to come up with processes and policies that are going to be more fair. I do think there is a real benefit to using AI in particular areas, but we don’t want it to come at the cost of some of that real human creativity, especially when a lot of the AI creativity has been trained on those very artists, right? There have been a lot of stories around that: I could go into one of the image generation tools and say, develop a new logo for AI4ALL in this artist’s style, right? Some tools will say, oh, we can’t do that, and others will just go ahead and create it, and there’s no real strict framework. So all of that human creativity, some of which artists may have effectively given up for free, 
is what those tools have been trained on and are charging others for. So I think we really need more action, and we really need those voices to be heard, in order for us to figure out what economic frameworks are going to be fair here, given the value that artists have already created and are not getting the benefit from, for some of these paid tools in particular. 

Hessie Jones 

I agree with you. It seems like we’re at an inflection point where government, policy, artists, everyone is involved; this is an interdisciplinary problem that we need to solve. 

Emily Reid 

Yeah. 

Hessie Jones 

Because it’ll hit all of us at some point in time. Somebody told me, you know, maybe AI is the biggest equalizer, that it will do everything better than the normal human being, and so we bring in universal basic income. And I said, I do not want that to be a default, even though it may be true at some point in time. There will be jobs that are displaced, and you cannot replace them unless people upskill themselves; that has to be done. But I apologize, I actually want to talk to you for another hour or two, but I can’t because of time.

Thank you so much, Emily, for coming, and I look forward to more discussions like this as AI develops further. I want to see what kind of milestones AI4ALL makes in the coming years. 

Emily Reid 

Thank you so much. I’ll just leave our conversation on one last note. This has been wonderful; thank you again. I know I could keep going too, so we’ll just have to do it again another time. But on what you just mentioned: I had a conversation with a colleague who works at one of the major generative AI companies, and he was making an argument that this is potentially a great equalizer, creating a level playing field. And again, I think this is a good place to use that comparison to the Internet, because that was a bit of the argument at the time for the Internet as well: that it was going to be a great equalizer, that it was going to create access. Today there are still folks who don’t have Internet access. And the other thing that happened is that in those early days of the Internet, it was much more of the Wild West. There were bulletin boards, and it was a really different kind of environment. Then you had more companies getting into the space, economic consolidation happening, and now the Internet is really influenced, like 90% of it, by four or five companies. So to me, what technologies like this do is create the opportunity for that to be the case, but they are ultimately going to end up working within the other systems that we have, the political systems, the social systems, all of those systems that we live in as human beings, and the technology is going to adapt to those unless we are thoughtful and strategic, and really active and hopeful, about what we can change and how we can use the opportunity these technologies present to change some of those systems. So again, I think there is a lot of hope for where we can have AI go, but so much of it is going to depend on what we choose to do. So thank you, Hessie. 

Hessie Jones 

I have hope for the next generation. I tell my kids this all the time: you guys are going to change the world, and I hope it allows us all to live a happier life, right? These days, it’s hard to question that. 

Emily Reid 

Thank you. 

Hessie Jones 

Anyway, thank you again. For our audience, thank you for joining us today. If you have topics you want us to cover, please email us at communications at Altitude Accelerator. Tech Uncensored is produced and powered by Altitude Accelerator. My name is Hessie Jones, and until next time, have fun and stay safe. 

Host Information

Hessie Jones is an Author, Strategist, Investor and Data Privacy Practitioner, advocating for human-centred AI, education and the ethical distribution of AI in this era of transformation.

She currently serves as the Innovations Manager at Altitude Accelerator, providing the necessary support for Altitude Accelerator’s programs, including Incubator and Investor Readiness. She acts as the liaison among key stakeholders, providing operational support and ultimately driving founder success.

LinkedIn

You can also listen to this podcast on Spotify.

Please subscribe to our weekly LinkedIn Live newsletters.

Will LLM Adoption Demand More Stringent Data Security Measures?

by Hessie Jones

The rise of large language models (LLMs) has significantly changed how we communicate, conduct research, and enhance our productivity, ultimately transforming society as we know it. LLMs are exceptionally skilled at natural language understanding and at generating language that seems more accurate and human-like than their predecessors. However, they also pose new risks to data privacy and the security of personal information. Compared to narrow AI systems, LLMs give rise to more complex issues, such as sophisticated phishing attacks, manipulation of online content, and breaches of privacy controls.

A recent study by MixMode analyzed data from the National Cyber Security Index (NCSI), the Global Cybersecurity Index (GCI), the Cybersecurity Exposure Index (CEI), and findings from Comparitech to assess cyber safety across 70 countries. Findings indicate that countries with the most robust cybersecurity infrastructures include Finland, Norway, and Denmark. The United Kingdom, Sweden, Japan, and the United States also maintain strong defenses against cyber threats. While the USA scores highest on the Global Cybersecurity Index, it only ranks ninth in overall safety. Similarly, Canada, with a strong Global Cybersecurity Index, ranks tenth overall in safety.

 

MixMode Global Cybercrime Report 2024

Worrisome are the countries that pose the highest risk for cyber-attacks. The threat exposure in these emerging countries includes the economic impact of potential financial losses for both public and private sectors, the threat to national security systems, particularly their vulnerability to cyber espionage, and attacks on critical infrastructure and national safety. These countries, more prone to data breaches, face the exposure of highly sensitive information, leading to identity theft, financial fraud, increased sources of cybercrime, and eroded investor and consumer confidence.

MixMode Global Cybercrime Report 2024

Alongside the rise of LLM adoption, the Biden administration signed the reauthorization of Section 702 of FISA into law in April, extending warrantless surveillance and affecting U.S. civil liberties amid widespread data collection concerns, and further complicating the government’s role in creating safeguards and ensuring trust around artificial intelligence.

To discuss the vulnerabilities associated with large language models and the ramifications of the new law on individuals’ data privacy rights and civil liberties, especially concerning emerging tech companies, I met with two experts: Saima Fancy, a data privacy expert and former privacy engineer at Twitter/X, and Sahil Agarwal, CEO of Enkrypt AI, an end-to-end solution for generative AI security.

The Perpetual Demand for Data

Between 2022 and 2023, there was a 20% increase in data breaches. Publicly reported data compromises rose by 78% year-over-year. The average cost of a data breach hit an all-time high last year at $4.45 million, marking a 15% increase from three years ago. Notably, 90% of organizations experienced at least one third-party vendor suffering a data breach. Globally, the number of victims in 2023 doubled compared to 2022.

Saima Fancy explained that the driving force behind these issues is organizations’ intense desire for data collection, which often results in reckless behavior. Technologies like OpenAI’s, she indicated, are launched prematurely to maximize data collection. “These technologies are often released too early by design,” she noted, “to accumulate as much data as possible. While they appear free, they’re not truly free because you’re providing your personal data, essential for training their models.”

She added that organizations could have opted to legally acquire data and train their models in a structured manner. “Many tools were ready but weren’t launched immediately because they were undergoing rigorous sandboxing and validation testing,” she explained, noting the rush to release new technologies isn’t always deliberate but often fueled by enthusiasm and the pressure to innovate, which can lead to unintended consequences. “There’s a race to release new technologies, which can inadvertently cause harm. It’s often just a case of ‘let’s release it and see what happens.'”

Fancy also highlighted that these tools are rarely stable in their nascent form and that developers fine-tune the models over time. “This means the initial outputs might not be accurate or what users expect. Despite this ongoing learning phase, these tools are already live on a global scale,” she added.

LLMs and the Indiscriminate Scraping of PII

Given that LLMs have been developed from data gathered indiscriminately across web properties, there’s a risk of exposing sensitive details such as credentials, API keys, and other confidential information. Fancy believes that society’s susceptibility to security threats is unprecedented, observing, “The public is undoubtedly more vulnerable now than ever before. We’re in the fifth industrial revolution, where generative AI is a relatively new phenomenon. As these tools are released publicly without adequate user education, the risk skyrockets. People input sensitive data like clinical notes into tools like ChatGPT, not realizing that once their personal information is entered, it’s used to train models. This can lead to their data being re-identified if it’s not properly protected.”
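One practical mitigation for the scenario Fancy describes is to redact identifiers before a prompt ever leaves the user’s machine. The sketch below is a minimal illustration, not a technique the interviewees endorse; the patterns are assumptions and far from exhaustive, and real deployments would use dedicated PII and secret scanners.

```python
import re

# Illustrative redaction pass: scrub obvious identifiers and secrets from a
# prompt before it is sent to any external LLM API. Patterns are examples only.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    # Long high-entropy tokens often indicate API keys or credentials.
    "API_KEY": re.compile(r"\b[A-Za-z0-9_-]{32,}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely PII and secrets with typed placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Patient jane.doe@example.com, 416-555-0199, key sk_live_a1B2c3D4e5F6g7H8i9J0k1L2m3N4o5P6"
    print(redact(raw))  # identifiers replaced by placeholders before any API call
```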

She emphasized the risk extends beyond individual users to corporations as well, particularly if employees are not properly trained in using these technologies, including prompt engineering. “The vulnerability is extremely high, and for corporations, the risk is even greater because they risk losing public trust and they are not immune to data breaches. As these tools evolve, so do the techniques of malicious actors, who use them to refine their ransomware and phishing attacks, making these threats more sophisticated and costly to mitigate.”

Today we are witnessing the swift emergence of regulation through the EU AI Act, the most comprehensive legislation on AI to date, and the Biden Executive Order on Safe, Secure, and Trustworthy AI issued in late 2023. Sahil Agarwal, CEO of Enkrypt AI, points out, “Since the introduction of ChatGPT, there has been a significant increase in awareness among the public, legislators, and companies about the potential risks of AI. This heightened awareness has surpassed much of what we’ve seen over the past decade, highlighting both the potential and the dangers of AI technologies.”

He adds that the regulatory mandates are clear: if you’re handling customer data or distributing tools, you need to be mindful and ensure those tools aren’t used for harmful purposes. Nor do the penalties apply indiscriminately to any startup working with generative AI technology; instead, he continues, “…they’re targeted at specific stages of a company’s development and certain types of general-purpose AI technologies,” emphasizing that “they’re there to guide more responsible innovation.”

Cyber Attacks are More Advanced

Today’s AI technology can enable highly convincing phishing emails or messages that appear legitimate and trick individuals into revealing sensitive information. LLMs have also enabled attackers to conduct more effective social engineering attacks by generating personalized messages or responses based on extensive knowledge of a target’s online activity, preferences, and behaviors. We’ve also seen a rise in malicious content, as LLMs can generate fake news articles or reviews that spread misinformation, manipulate public opinion, or propagate malware. The speed at which LLMs can automate certain stages of cyber-attacks, generating targeted queries, crafting exploit payloads, or bypassing security measures, makes it increasingly difficult to identify attacks before they’re carried out. In addition, adversarial techniques can now manipulate inputs to produce outputs intended to deceive spam filters and fraud and malware detection systems.
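On the defensive side, one small but concrete countermeasure to character-level evasion of content filters is normalizing input before matching. A minimal sketch, assuming a naive keyword blocklist (the blocklist and phrases are hypothetical, and this is no substitute for robust classifiers):

```python
import unicodedata

# Adversarial inputs often hide flagged terms behind zero-width characters or
# look-alike Unicode. Normalizing before matching closes the simplest gaps.
ZERO_WIDTH = dict.fromkeys(map(ord, "\u200b\u200c\u200d\ufeff"))  # maps to None = delete

def normalize(text: str) -> str:
    # NFKC folds many look-alike characters (e.g. fullwidth letters) to
    # canonical forms; translate() then strips zero-width characters.
    return unicodedata.normalize("NFKC", text).translate(ZERO_WIDTH)

BLOCKLIST = ["wire transfer", "verify your password"]

def flags(text: str) -> list[str]:
    cleaned = normalize(text).lower()
    return [term for term in BLOCKLIST if term in cleaned]

# A zero-width space inside "password" evades a naive filter,
# but not one that normalizes first.
assert flags("Please verify your pass\u200bword now") == ["verify your password"]
```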

Fancy explained that both the personal and commercial levels are affected and that many systems are vulnerable due to outdated technology or inadequate security measures during transitions to cloud or hybrid systems. Startups often overlook security and privacy measures, a clear oversight. “From a privacy-by-design perspective, being proactive rather than reactive is crucial,” she revealed. “Starting with security measures early in the software development life cycle is essential to prevent sophisticated attacks that can severely impact institutions like banks and hospitals.”

She emphasized the naive thinking of many startups that believe they’re immune to attacks, adding, “But what LLMs have done is dramatically broaden the attack landscape, making it easier for malicious actors to observe and eventually exploit vulnerabilities in a company’s systems. It’s crucial for businesses, especially startups, to prioritize funding for security measures from the outset to protect against these threats. Regulations are also becoming more stringent globally, which could have severe repercussions for non-compliance.”

Agarwal revealed his team has uncovered serious safety issues and vulnerabilities in the top LLMs, and he agrees that these models are perhaps not ready for prime time. “Traditionally, cybersecurity and standard Data Loss Prevention (DLP) solutions have focused on monitoring specific keywords or personally identifiable information (PII) exiting the network. However, with the advent of generative AI, the landscape has shifted towards extensive in-context data processing.” He states that DLP solutions should now be tuned to the context of activities and potential security threats, rather than just monitoring data outflows.
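What context-tuned DLP might look like is easiest to see in code. The sketch below is illustrative only; the gate, patterns, and thresholds are hypothetical and not Enkrypt AI’s product. Rather than flagging isolated keywords, it scores the whole outbound prompt for contextual signals before allowing it to reach an external LLM.

```python
import re
from dataclasses import dataclass

@dataclass
class Verdict:
    allow: bool
    reason: str

# Weak contextual signals: one hit may be a casual mention, several together
# suggest a clinical record. Credential material is blocked outright.
CLINICAL_HINTS = re.compile(r"\b(diagnosis|prescribed|patient|mg|symptoms)\b", re.I)
SECRET_HINTS = re.compile(r"\b(api[_ ]?key|password|BEGIN (RSA|EC) PRIVATE KEY)\b", re.I)

def gate_prompt(prompt: str) -> Verdict:
    if SECRET_HINTS.search(prompt):
        return Verdict(False, "credential material in prompt")
    if len(CLINICAL_HINTS.findall(prompt)) >= 3:
        return Verdict(False, "prompt reads like a clinical record")
    return Verdict(True, "no contextual risk detected")

# Blocked: four clinical signals in context, not just one keyword.
print(gate_prompt("Summarize: patient reports symptoms, prescribed 20 mg daily"))
```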

Agarwal added that safe adoption of these AI systems has yet to produce standard practices, mainly because of the nascency of LLMs: “While many discuss the concept of responsible adoption, the specifics of implementing, measuring risks, and safeguarding against potential issues when using large language models and other generative technologies in businesses have not been fully established.”

The Irony of AI Safety Amid the Reauthorization of Section 702, FISA into Law

The recent reauthorization of Section 702 of the Foreign Intelligence Surveillance Act (FISA) has created a dichotomy within the same U.S. government that has also mandated the development of trustworthy artificial intelligence. The law was established in 1978 to surveil foreign individuals and underwent significant changes after 9/11, allowing broad sweeps of U.S. citizens’ information without a warrant. Under this practice, known as “incidental collection,” such information, once obtained, could still be retained and used. Key events include the 2013 revelations by NSA whistleblower Edward Snowden, who exposed the PRISM program and the involvement of major tech companies like Google and Facebook (at the time) in giving the government unfettered access to user data. This bulk collection of U.S. citizens’ emails, mobile communications, and phone records was later ruled unconstitutional. Despite added provisions to increase oversight and minimize incidental collection, the law was renewed without amendments, resulting in surveillance capabilities more powerful than anything contemplated during the initial AI hype eight years ago.

Fancy called this renewal a regrettable development for companies that collect data and create models. “With laws like this being renewed for another two years, the implications are huge. Federal governments can compel companies to hand over data on a span of the population, which is essentially surveillance at another level. This is scary for regular folks and is justified under the guise of world state protection. It may seem necessary at face value, but it opens doors wide for abuse on a larger scale. It is unfortunate.”

While it may seem there is no way out of this mandate, companies like Signal and Proton claim they cannot access user data because end-to-end encryption allows only users to access their own information. Could this absolve companies of their responsibility amid the renewed government compliance requirements? Fancy acknowledged, “There are data vaults where customers can store their data, and the company only works with the metadata. These vaults are locked and encrypted, accessible only by the customer. Blockchain technology is another method, where users hold the keys to their data. These methods can protect user data, but they also limit the company’s ability to monetize the data. Nonetheless, they are valid use cases for protecting user information.”
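As a rough illustration of the user-held-key pattern Fancy describes, the sketch below uses the Python cryptography package. It is a toy, not Signal’s or Proton’s actual protocol: the point is that the service stores only ciphertext and metadata, so a compelled disclosure yields nothing readable.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Client side: the key is generated and kept on the user's device.
user_key = Fernet.generate_key()
vault = Fernet(user_key)
ciphertext = vault.encrypt(b"clinical note: ...")

# Server side: can store and replicate the blob and work with metadata,
# but cannot read the contents, and cannot hand over plaintext it never had.
stored = {"blob": ciphertext, "metadata": {"size": len(ciphertext)}}

# Only the client, holding user_key, can decrypt.
assert Fernet(user_key).decrypt(stored["blob"]) == b"clinical note: ..."
```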

Startups Need to be Proactive: Data Privacy and Security Should not be an Afterthought

Agarwal is adamant that innovation should not come at a human cost. He argues that startups creating technologies with potential adverse effects on customers need to integrate ethical guidelines, safety, security and reliability measures from the early development stages, adding, “We cannot adopt a wait-and-see approach, scaling up technology like GPT-4o to widespread use and only then addressing issues reactively. It’s not sufficient to start implementing safety measures only after noticing misuse. Proactive incorporation of these safeguards is crucial to prevent harm and ensure responsible use from the start.”

Fancy emphasized that many out-of-the-box tools on the market can help companies implement new security measures within their cloud infrastructure. Whether the cloud is private or public, these tools can manage data sets, create cloud classifications, categorize and bucket data for protection, and constantly monitor for infractions, loopholes, or openings within the cloud structure. She acknowledged, however, that these tools are not readily affordable and require upfront investment as data is brought in, whether structured or unstructured. As Fancy put it, “It’s crucial for companies to know where their data is located, as many big data companies admit to not knowing which data centers their data resides in, posing a significant risk.” She also pointed out the importance of encryption, both for data at rest and in transit. Companies often focus only on encrypting data at rest; however, she notes, “…infractions frequently occur when data is traveling between data centers or from a desktop to a data center. You’ve got to make sure the encryption is happening there as well.”
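To make both encryption points concrete, here is a minimal sketch on one common stack, AWS S3 via boto3; the bucket name is hypothetical and this is one configuration among many. Server-side encryption covers data at rest, and a bucket policy refusing non-TLS requests addresses Fancy’s point about data in transit.

```python
import json
import boto3  # pip install boto3

BUCKET = "example-startup-data"  # hypothetical bucket name
s3 = boto3.client("s3")

# At rest: ask S3 to encrypt the object with a KMS-managed key.
s3.put_object(
    Bucket=BUCKET,
    Key="records/2024/export.json",
    Body=b"{}",
    ServerSideEncryption="aws:kms",
)

# In transit: deny any request to the bucket not made over TLS.
deny_plaintext = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(deny_plaintext))
```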

She highlighted that we are in a favorable time of technological evolution, with many tools available to enhance data security. While these tools’ costs are decreasing, it’s still necessary for companies to allocate funds for privacy and security measures right from the start. “As your funding is coming in, you’ve got to set some money aside to be mindful of doing that,” Fancy stated. Unlike major companies that can absorb the impact of regulatory actions, small startups must be proactive in their security measures to avoid severe consequences.

Startups, however, can take other measures to protect data privacy and security. Fancy suggests smaller startups practice data minimization “and practice collecting data for the purpose that the data was collected. They can put robust consent management measures and audit logging in place that allows startups to manage logs effectively and monitor access controls to see who has access to the data.”
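A minimal sketch of what audit logging plus access control can look like for a small team follows; the roles, names, and log format are illustrative assumptions, not a prescribed design. Every read is checked against an allow-list and recorded, whether or not it succeeds.

```python
import logging
from datetime import datetime, timezone

# Append-only audit log: every access attempt is recorded with who, what, when.
logging.basicConfig(filename="audit.log", level=logging.INFO)

ACCESS_CONTROL = {"records": {"analyst-7", "support-2"}}  # role allow-list

def read_record(user: str, record_id: str) -> dict:
    allowed = user in ACCESS_CONTROL["records"]
    logging.info(
        "ts=%s user=%s record=%s allowed=%s",
        datetime.now(timezone.utc).isoformat(), user, record_id, allowed,
    )
    if not allowed:
        raise PermissionError(f"{user} may not read {record_id}")
    return {"id": record_id}  # a real system would fetch from the datastore

read_record("analyst-7", "cust-001")    # logged and allowed
# read_record("intern-1", "cust-001")   # logged, then PermissionError
```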

Fancy noted that there are healthy mechanisms and practices within data ecosystems that startups can adopt to enhance their security measures. She stated, “These practices can be a viable alternative for startups that cannot afford comprehensive security tools initially.” Startups could start with smaller features of these tools and expand their capabilities as their funds grow. By adopting these practical measures, startups can effectively enhance their data security and privacy protection without incurring significant initial costs.

Ethics is Now Mainstream

Where technology, compliance, and brand risk collide is in the realization that the most compelling technology to date, brought to the masses by OpenAI, the darling of large language models and heavily venture-capital backed, is still flawed. Agarwal recalled that when he recently spoke with a principal at a private equity firm, the firm was dealing with a major scandal: articles it had produced were leaked, the result of its decision to remove certain safeguards. He stressed, “This incident highlights that ethical considerations are not just about morality; they’ve become crucial to maintaining a brand’s integrity. Companies are motivated to uphold ethical standards not solely for altruism but also to mitigate brand risks. This need for robust controls is why topics like generative AI security are gaining prominence. At events like the RSA conference, you’ll find that one of the central themes is either developing security solutions for AI or employing AI to enhance security measures.”

Data Security Needs to be in Lock-Step with Advancing AI

The reauthorization of Section 702 of FISA underscores a significant tension between the pursuit of advanced AI technologies and the imperative of safeguarding data privacy. As LLMs become increasingly integrated into various aspects of our lives, the potential for sophisticated cyber-attacks and the erosion of individual privacy grows. Saima Fancy and Sahil Agarwal emphasize the urgent need for robust cybersecurity measures, ethical guidelines, and proactive regulatory compliance to mitigate these risks.

Promoting AI innovation while ensuring data privacy and security presents a complex challenge, prompting organizations to balance the benefits of cutting-edge technologies with the responsibility of protecting sensitive information. Ensuring AI safety, maintaining public trust, and adhering to evolving regulation and security standards are essential to responsible innovation. It’s early days, but we’re already seeing organizations take the crucial steps of prioritizing security and ethical considerations to foster a safer and more trustworthy world.

Please note this article originally appeared on Forbes.

The post Will LLM Adoption Demand More Stringent Data Security Measures? appeared first on Altitude Accelerator.
