Regulating the Future: AI Policy Challenges and Opportunities
- jared2766
- Jul 16, 2025
- 32 min read
Navigating the Intersection of AI, Policy, and Social Good: Insights from the Capstone Podcast
Introduction
In a recent episode of the Capstone Podcast, host Jared Asch engaged in a lively conversation with AI experts Michelle Neitz and Chris Brooks from the University of San Francisco. They explored the evolving landscape of artificial intelligence, the ethical and policy challenges it presents, and its implications for social good. This insightful dialogue offers valuable lessons for those looking to understand how technology intersects with government policy and how stakeholders can collaborate to navigate these changes effectively.
Exploring AI and Its Implications
Artificial intelligence is more than just a technological trend—it's a transformative force with wide-ranging applications, from autonomous vehicles to chatbots like ChatGPT. According to Chris Brooks, public consciousness has been particularly drawn to large language models for their ability to mimic human conversational patterns and generate content. However, as these technologies disrupt traditional roles, they present challenges and opportunities for various sectors, including education and government policymaking.
Selling Into Government: A Strategic Approach
Jared Asch shares an intriguing perspective on the AI investment climate, noting that companies increasingly label themselves as AI-focused to attract venture capital. This trend underscores the importance of understanding AI's transformative capabilities when selling into government sectors. Companies aiming to win government contracts should focus on demonstrating how their AI solutions can enhance efficiency, improve public services, and be developed responsibly.
1. Understand Government Needs: Successful companies are those with real, well-developed use cases that address specific challenges within government operations, such as improving administrative efficiency or aiding in policy development.
2. Highlight Value Propositions: Companies selling to government don't need to become giants like Google; they should emphasize their ability to deliver meaningful impact on a smaller yet significant scale.
3. Leverage Regulatory Insights: Understanding the regulatory frameworks and having a strong grasp of the legal landscape is crucial when proposing tech solutions or policy framework approaches to government clients.
The Role of Government in Technology
As technology evolves, the role of government in regulating it becomes increasingly important. Guest Michelle Neitz points out the necessity of developing a legal framework that balances innovation with consumer protection. Governments must anticipate potential harms and craft policies that protect vulnerable populations while promoting ethical standards in tech development.
Emerging Tech and Policy Frameworks
Brooks and Neitz stress the importance of a multi-stakeholder approach to policymaking, especially in education and data privacy. Policymakers need to work alongside educators, industry experts, and the public to craft policies that not only protect but also harness AI for the common good. Collaborative approaches can illuminate diverse perspectives and drive more informed decision-making.
Conclusion
The Capstone Podcast episode featuring Michelle Neitz and Chris Brooks is a compelling exploration of AI's impact on law, technology, and society. By reflecting on their insights, stakeholders can better navigate the challenges and promises of AI. As we continue to grapple with these technological advancements, it's vital for government entities, educational institutions, and private companies to collaborate and create frameworks that encourage innovation while safeguarding societal values. Whether you're looking to sell tech into government or understand the emerging policy landscape, this conversation provides essential guidance.
For the Full Transcript
Welcome to the Capstone Conversation, where you learn about what's happening in the Greater East Bay. I am your host, Jared Asch.
Today we're gonna talk about AI. Where is artificial intelligence going? And what should we know from a policy perspective out there? We are joined by two worldwide experts on the topic: Michelle Neitz from the University of San Francisco, the Center for Law, Tech, and Social Good.
She is a lawyer and a policy expert, and Chris Brooks, who is a computer scientist and professor through the center as well.
First, as we move forward, why don't you tell us a little bit about yourself?
Michelle, why don't you go and then we'll let Chris go.
I've been a law professor since 2006. I started the Center for Law, Tech and Social Good, originally named the Blockchain Law for Social Good Center. And the reason I started the center is because I have been an ethicist for most of my time as an academic.
When I learned about decentralized technologies, I wondered what are the ethics behind this? Who do you arrest if there's a problem? And so I dove into emerging tech from an ethics perspective and started doing government trainings in 2022 through the center. Because government officials really want to know how the technology works, but there are very few academic style trainings for them to take.
And so the center started to grow, and I moved to USF in January of 2023 because I wanted to do interdisciplinary work with computer scientists and business folks.
So now the Center for Law, Tech and Social Good has grown. It is a very interdisciplinary center. We also have international affiliated faculty at the center. So we're doing a wide mix of different perspectives on blockchain and AI and even quantum computing, because we're trying to help law students, lawyers, and lawmakers understand the issues around these technologies from a legal perspective.
And can you define for me what is social good?
Yes. Actually, it's funny you ask that. That's my summer research project because I get asked this all the time. I'm actually building a taxonomy for the term social good.
So social good is one of these very vague terms that can mean different things depending on where you come from. In my mind, the term social good means anything that benefits society as a whole, which includes vulnerable populations, marginalized populations, and people who do not ordinarily have a lot of power.
I'm really concerned about making sure that technology develops in such a way that we're including the people who may not be at the development table, and that policymakers understand that when they're regulating this, they do need to think about it from the perspective of those who may not have power.
Now this doesn't mean I think we shouldn't have innovation, and in fact, much of innovation comes from allowing people to break into technology who may not have a Stanford PhD or a Harvard PhD. I wanna make sure we keep the doors open for innovation to all populations.
My name's Chris Brooks. I'm a professor at USF and I'm a computer scientist. I started studying AI and machine learning in 1993, and I've been at USF since 2002. I came there after getting my PhD at the University of Michigan.
I've been broadly interested in AI and machine learning particularly problems that arise when you have multiple different agents or individuals learning at the same time. Sometimes those might be humans, sometimes they might be autonomous agents, sometimes they might be both.
Economics has been thinking about that for a couple hundred years, so we're able to steal from them. Like Michelle, I'm also really interested in the idea of technology as a democratizing force. I think one reason she and I get along really well is, while she's coming at this problem from a policy point of view, how do we use technology to enhance social good, I come at it from an engineering perspective. That is, how do you teach designers and creators of technology to have those values when they're setting out to build something? So that when I'm creating that new search engine or that new large language model or whatever it might be, I'm not just thinking about the 1% or those early adopters, but I'm thinking about the long-tail effects, who it is in the world that this is actually gonna help, and how am I choosing the right problems to solve in the first place.
I think USF in general has these Jesuit values of the common good, as Michelle said, and wanting to really make the world a better place for people. And so that's where I'm coming from.
Chris, let's start. Define what is AI, and why now?
Great question. I've been working on this for a long time, and it's been a field since the fifties. And so part of the challenge is that it's not one thing. And I think this makes it really hard for the media to talk about AI and what it is, because often we think of ChatGPT; that's the one that's come into the public consciousness the most.
And ChatGPT is an example of what's called a large language model. But there's this whole other suite of technologies that have all really come to fruition in the last few years, from autonomous vehicles, to rovers on Mars, to all the data that's collected about us all the time and used to send me a traffic ticket or figure out whether or not I'm eligible for a mortgage, to all the different kinds of translation technologies and recommender systems.
My point is that there's this whole host or constellation of technologies, and it's challenging 'cause they don't really work the same way, but they all have the same flavor: they're solving problems that, if a human was doing it, you would say, yeah, that's a hard problem. That's one I'm happy to have somebody do for me.
And they're all impacting our lives by displacing human labor, which is something we can talk about more later. But it makes it tough to talk about because, again, they're all very different. But I think it's ChatGPT and the large language models that really caught people's attention, because they've been so transformative and they impact the lives specifically of people who are shaping media narratives, right?
Writers and creators are being directly affected by this. Factory automation's been around for 20 years, but blue-collar auto workers don't write media columns. That hasn't gotten the same press that ChatGPT has gotten.
Maybe 'cause ChatGPT is writing those articles for the reporters.
But certainly, that's not hurting. For the average person or city council member that's listening to this, what is AI for them versus that automation in the factory? What is it today?
I think it's this confluence of different technologies, right? You have things like LLMs, large language models that are able to probabilistically generate text or summarize text, or have a conversation with a human in a way that feels really intelligent and it's displacing some sort of human effort.
That's similar to a Waymo that's navigating streets, figuring out how to interact with customers, and displacing human effort, right? Which is similar to a factory that's automated, that's doing something smart by welding a door onto a car, displacing human effort again, right?
I think the thing to look to is: is there something that a person was doing where, if I pointed to it, I would say, yeah, boy, that required a person to do it, that couldn't have been a cat doing it, and now we have a machine doing it? Intelligence is kind of the same way. We know it when we see it. AI is machines doing things that I would say, yeah, that took intelligence to do.
And you referenced the newspapers and the storylines, particularly around ChatGPT and some others, but at every investor conference I go to, people are talking about investing in this.
It's almost like, yesterday I was Company AB, but today I am Company AB that does AI. Yep. And all of a sudden I'm gonna get $25 million more in my capital raise than I did yesterday. Why is that? Why is it so intriguing? Help us understand.
That's a great question, and as you mentioned, it's a little bit like the early days of the internet, where nobody wants to be the one who missed out on the big AI company, right? There's a lot of venture capital money going into this world. It remains to be seen whether this is a speculative bubble or whether, like the early days of the internet, there's gonna be some winners and losers.
I think it really is a transformative technology. In particular, a lot of people are really interested in all the cool things you can do with a large language model. In essence, a large language model is what's called a transformer. It takes a string of input and it gives you a string of output back.
And that input maybe is an English sentence and it gives you a Spanish sentence, or maybe the sentence is, write me a piece of code, and the output is a huge program. Or maybe it's, make me a cat picture, and the output is a cat picture. What's been really impressive is humans figuring out really clever ways to use that.
Oh, I can transform this input into this output. And that turns out to be a super useful hammer to hit a whole lot of nails with. Will it all pay off? Some of these folks are gonna get very rich. Yes. And a lot of 'em are not, I think.
But nobody wants to miss out. I think that's where the hype is coming from. But there is real value here; it's not just hype. This is a real technology and it really is transforming things. I think the secret sauce is gonna be, what are the real effective uses? What are the ones that are really gonna have value attached to them?
And where there's really gonna be a value proposition, and it's not just the Pets.com of 1993. Although, sorry, I always have to bring that up, 'cause we always made fun of Pets.com back in the nineties, and I get stuff from Chewy every day now. What we thought was a big failure? The problem there was just that Pets.com didn't have a logistics chain in 1993, but in 2024 that totally exists.
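To make that string-in, string-out framing concrete, here is a minimal sketch in Python. The call_llm function is a hypothetical stand-in for whatever hosted model you might call, not any particular vendor's API; the point is only that very different tasks ride on the same text-to-text interface.

```python
# A hypothetical text-in, text-out interface; call_llm is a stand-in, not a real API.
def call_llm(prompt: str) -> str:
    """Pretend model call: a real system would send `prompt` to a model endpoint."""
    return f"<model output for: {prompt!r}>"  # placeholder response

# The same "hammer" gets pointed at very different "nails":
print(call_llm("Translate to Spanish: The meeting starts at noon."))
print(call_llm("Write me a Python function that reverses a string."))
print(call_llm("Summarize this report in three paragraphs: ..."))
```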
It's putting something out there, not just putting the name and .com next to it; it's creating value. And I'm working with a lot of govtech companies, selling them into government. They don't need to become the size of Google or something else, but they could get a hundred clients, 200 clients, and sell for $50 million within a three-year trajectory.
And I think a lot of them are actually planning that more than becoming the big people.
Agreed.
Okay, so how do we know, and this goes maybe more into the policy, but if both of you can weigh in, how do we know if AI is correct? Like right now, I was reading something yesterday with my 10-year-old about swim rules for her swim meets, and I was looking at the Google AI answer to this question on turns, and I was like, whoa, is this right? Does this apply to my league? But let's start with that.
Chris, you go for the technical. I'll talk about the legal ramifications.
I'll give you a super quick technical explanation, but I'm not gonna be very technical.
When we're talking about large language models, the way they work, in essence, is they generate a probabilistic string of text. So you give 'em a starting point and then they try to guess the words that come after that.
There's way more to it than that, but that's kind of the essence. In AI terms, we say that it doesn't have what we call a world model. It doesn't have ground truth that it can fall back on, the basic vanilla LLM, and this is why it hallucinates. A lot of the work that people are doing now, folks like Gemini or a system like Perplexity, is trying to align what the large language model generates with some sort of ground truth, which is normally facts in a database.
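As a toy illustration of that "guess the next word" loop, here is a small Python sketch. The probability table is invented for this example, it is not from any real model, and it shows why a generator with no fact store can produce fluent but false text.

```python
import random

# Invented next-word probabilities, standing in for the billions of weights a
# real large language model learns from text. Nothing here encodes what is true.
NEXT_WORD = {
    "the capital of france is": {"paris": 0.7, "lyon": 0.2, "atlantis": 0.1},
}

def complete(prompt: str) -> str:
    """Sample one likely next word for the prompt, with no check against facts."""
    probs = NEXT_WORD.get(prompt.lower(), {"...": 1.0})
    words, weights = zip(*probs.items())
    return prompt + " " + random.choices(words, weights=weights)[0]

# Most runs print "...paris", but sometimes "...atlantis": fluent, confident, wrong.
# Grounded systems add a second step that checks the output against a database
# of facts (retrieval), which is the alignment with ground truth described above.
print(complete("The capital of France is"))
```

Real models operate over tokens and far richer context, but the core loop, sampling the next piece of text from learned probabilities, is the same, which is why grounding and double-checking matter.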
Now this is not a new problem for us as information consumers, right? I remember having the same discussion when Wikipedia came out. How will we possibly, because humans can edit Wikipedia, how will we know that it's true? So I would say you don't know that what's coming out of an LLM is true any more than you know what's in the newspaper is true or any more than you know what's in Wikipedia is true.
You still have to be an informed information consumer: vet your sources, do your research, go back and double-check things, and be smart. You can't take that at face value any more than I would take what the New York Times prints on its front page at face value, apart from how much I trust its authority.
I think the danger is when we as information consumers start to blindly trust things without using our common sense. That's the risk with AI: it becomes really easy to just trust whatever you read.
I'll follow up with the legal ramifications of it. You don't have to look very far right now to find attorneys who have gotten in trouble for turning in briefs to judges with fake case citations in them.
And where this is so concerning is that we are used to taking whatever's on the internet as true, and this has been a problem across the board, as Chris talked about, for a very long time. A Facebook post may not be true. Wikipedia may not be true. But I think because these AI chatbots appear to have a lot of confidence in their results, people just assume that it's true.
And so one of the things that's coming up in the legal profession is that lawyers who are stressed out and looking for shortcuts are literally asking a chatbot to write out a brief and then turning it in with a fake case citation, which tells me a number of things that anybody outside of the legal profession can learn from. Number one, you have to double-check it, right?
For example, I'm doing a big research project this summer. I have two research assistants. I have no problem with them starting from the point of AI to get a read of the landscape. That's it; that's what they're allowed to use AI for. From there, everything has to be double-checked. AI will give you an interesting overview of the landscape.
Then you go and find the citations and read the cases and double-check to say that the cases actually do indeed say that. So everybody who's used a chatbot knows that they can hallucinate, as Chris said. My very first query into an AI chatbot was, who is Michelle Neitz? And it came back that I was a law professor at USF who teaches civil procedure, evidence, and another course.
This raises a lot of legal problems if you're using it at work, right? It also raises problems if you're using it personally, and we can talk more about what it means to just hand your data over to a chat bot that you don't really know where it's going.
But I do think that the idea that you're not double-checking every single thing you read out of a chatbot is problematic. Anyone looking to use LLMs needs to know we are not at a state where you can call this truth yet.
I think that's good. Let's dive into that. I was in San Francisco last year with Ted Deutch of the American Jewish Committee, and he was talking about his concern that these chatbots, these AI-generated answers, could lead to antisemitism, racism; that it's really about who's programming it, and how do you possibly check that? Help me understand this. How do we, right? Is it bias just because whoever programmed it is biased? Does it pick up that bias?
Am I gonna see something different than somebody in Alabama might see? I don't know.
Yeah is the short answer there. There's kind of two threads, right? There's, do chatbots contain bias? And yes, all learning systems contain bias. But then there's disinformation, which I think is the bigger problem when we start to integrate it with social media, and you have bad actors who are explicitly using these tools to generate fake news.
And that's happening today, right? Just go on Facebook or Instagram and you'll see tons of garbage that's been generated to mislead people, sometimes by random people, sometimes by foreign intelligence services. But that's a real problem: these tools make it very easy to create your own narrative.
And then when you integrate that into the social media bubble, where everybody is only getting the news that confirms their priors, now we have a real problem. We have a tool that's not being used for the common good. It's being used to disrupt and to harm people.
And I would also add, there are legal fixes to some of this that are not being used.
For example, AI developer liability, which makes everyone in the space go, whoa, because unlike lawyers, developers don't have malpractice insurance. They don't have fiduciary duties. There's a lot of reasons that developers have a strong argument against liability. But developer liability is something that would very quickly change this landscape. Repealing Section 230 to make social media companies liable could also help to fix a lot of this.
So there are some legal fixes to this that we have not yet used.
Can you expand on that?
Sure. I won't get into Section 230, 'cause we're talking about AI today and not the internet.
But if you think about it, if I am using a chatbot that's spitting back antisemitic or racist or bigoted information to me, and I decide that this is not okay, it's unacceptable and it has caused me harm in some way, then under general liability rules, I should be able to legally go after the entity or person who developed the thing that caused me harm.
We're seeing a bit of this now, actually, with the teenage suicides that have been happening, and you see the parents of these teenagers legally going after companies. If the legal landscape around this were to change, it would shift things very quickly, I think, and make it so that people have to pay more attention to the biases of developers that are building these tools.
All right. But you're talking about Congress likely passing a law regulating some of this. Is that what the solution would be?
I have been in the emerging tech law space since 2017. I'm not waiting for Congress to do anything. There's a vote today on this stablecoin bill that has been talked about for many years.
I teach blockchain law every fall, and we don't have a federal law about this. But yes, if we had a federal privacy law, that would protect a lot of people, or if we had some sort of congressional approach to regulation, like the EU does, for example, a risk-based approach to regulation.
That would also at a minimum cause developers to pause and say, hold on, I wanna make sure that I don't run afoul of these rules. We don't currently have that in place. Some states are being much more proactive than others, but I think a lot of this is going to come through the courts.
And I do think that individual high-profile cases, like, for example, New York Times versus OpenAI, that's where we're gonna start to see a lot of shifting happening from a legal liability perspective.
One of the things you talked about is states, and states have typically led; in fact, they were often leaders on autonomous vehicle policy, which Chris had mentioned. But Donald Trump has, in their bill, a provision that no state can regulate AI for the next 10 years.
Thoughts on that then?
Chris, you wanna go first?
I'll let you start. Rampant corruption is the two-word phrase I would use.
Yeah. I have a lot of concerns about this, but I will put them, as a good lawyer should, into outline form. My first concern with this is the message that it's sending to states, which is: even though you are the laboratories of democracy, and even though we could learn a lot about which approach works best by letting states move forward and seeing which approaches fail and which approaches win, we've decided to take away that experimentation ability. That's a big problem, right? Especially in emerging technology. You see it with blockchain: states are taking different approaches to blockchain and crypto regulation, and we are learning from those different approaches.
To just remove that entirely is a real problem. My second concern with this is obviously consumer protection, right? As a state legislator, I'm just supposed to let AI development run rampant through my jurisdiction, and there's nothing I can do to help consumers who might be harmed by this.
This would leave it entirely to the courts, because the idea that the federal government will be able to create some sort of legislative solution within the next 10 years? Maybe in a decade, right? But it's certainly not coming out in a year, and that leaves this entire space unregulated, which is concerning.
My last quick point is just that the industry thinks this is good for them, and it is not. It is not good for them. They will suffer through an FTX Sam Bankman-Fried moment, just like blockchain and crypto did, and it will affect the value of their companies. It will affect the value of their utility for consumers.
This is a disaster for the industry, and I hope they realize that.
As an update to this: while the U.S. House passed the ban on state and local AI policy for ten years, it was removed in the Senate and the final version.
Yeah, I think you're right on, Michelle. And you mentioned autonomous vehicles; I think that's a really interesting historical experiment that's going on. Because autonomous vehicles are cars, they're governed by all the rules and regulations that govern cars, and that's forced Waymo and Cruise and some of these companies to go a little slower, run into problems earlier, and develop a safer product.
Waymo is great. Cruise found early on that their technology was not up to snuff, not because it's an AI technology, but because it was a car, right? So if you had that same level of scrutiny for something like OpenAI, I think we would be in a better position. And I agree with Michelle.
I think somebody in the industry is gonna have some huge disaster at an FTX scale, and hopefully it's only as harmful as FTX was and just costs somebody money, and doesn't have larger-scale consequences than that. 'Cause unfortunately, our industry has been very much make the mistake and then try to fix it, which has been okay so far.
But we're outgrowing that approach. Like, we don't get licensed, right? We don't have any sort of internal body that governs us. Back to Michelle's point earlier, I would look to my own software industry and say it's time to grow up a little bit and have a little bit more regulation.
And from a policy perspective, I could imagine that California may say to the feds like, we disagree, we're gonna pass our own laws,
we've already passed our own laws. We're really a leader. I say this as a proud native Californian.
We are definitely a leader in this space. I think we're trying to strike the right balance between promoting innovation and consumer protection. I do not think it's easy to do this. But I really wonder if this provision will pass. I guess by the time this comes out, we'll have a better sense.
Even some Republicans, like Marjorie Taylor Greene, are now saying, oh, I didn't know that was in there, I would never have done that. I hope this is a theoretical conversation that we're having and not one that would actually come out. But if it were to pass, and California were to get sued to force the preemption, I think it would be an interesting case to watch, but I hope not to have to see it.
We can talk about the politics of that for a long time, but let's talk about a policy framework, 'cause you talked about how this could be good for the tech companies. What should local government be thinking? What should the state legislature be thinking? And even Congress? How can they allow innovation but also reasonable protections, going back to the social good comment, right?
How do we build this policy framework?
You go first, Chris, it's good to hear from developers on this point.
Yeah, okay. I'll be brief, 'cause I feel like Michelle's got the better answer here. But to me it's crafting policy; policy is a tool to get innovators to do the thing, to innovate in the direction you want them to go.
If you incentivize particular kinds of behaviors, then you get people moving in that direction. We want more democracy. We want more consumer protection. We want technology that will really lift people up and not just make the super rich richer.
It's policies like that. It's transparency; it's things like giving consumers the ability to control their own data. And there's tons of business opportunities in there. It's not like you can't get rich serving consumers. I think it's having the right incentives. I'm a game theorist, and so it's all about incentives and pushing developers into the space that we want them to be in.
I'll just add, so again, many thoughts, but this is a short podcast. I think any law that's drafted around this has to have input from a variety of people, and I'm pleased to see that states are looking to working groups and things like that to try to get input from a variety of stakeholders.
The problem is that can take a long time. I say this as someone who was on a blockchain working group for California back in 2019. It takes a while for these things to come to fruition, and in the meantime you have people being harmed. So in my mind, striking that balance means ensuring that you're looking around the corner to see what harms could be coming, right?
And I'll take one example: privacy. We have a lot of folks who are uploading information into chatbots, not having any idea what those chatbots do with that information, right? And when it's companies like Microsoft or Google or Anthropic, companies that people may trust, they may be less concerned about that.
When it's a company like DeepSeek, which is an open-source model that you can use and download, and yes, you can take privacy protections around DeepSeek, many people do not recognize that is possible, and so they're uploading personal information in. Now, the California privacy laws are looking to try to prevent any sort of privacy violation by declaring that information that you upload is still your personal information and is still covered by the CCPA and the privacy laws in California.
But this is a real problem. People do not recognize that they're uploading trade secret information from their company, for example. You shouldn't upload your tax return to ask how you could save on taxes. These are the sorts of things that AI might be able to do, but you do have to think about what you're doing with your data, and laws can help to make sure that folks are given that moment to pause around that.
Chris, from your side, what are the tips for the average person out there, other than don't upload your income taxes?
And help us understand: what should we not be uploading, or how do we upload it and keep it confidential, right? Because that is a good use case with my taxes.
That's a great question. And just to be precise, we're talking about interacting again with a large language model like Claude or Gemini or ChatGPT, or any kind of system, AI or not, where you're handing your data off to some third party.
The challenge here is that ChatGPT and their ilk have this lovely interface that seems like a friend that you're talking to. If I said to you, would you mail all your tax returns to Microsoft? You might say, hell no, I'm not gonna do that. But then when you put it into Bing to answer a question, for some reason that feels different.
Unfortunately, I don't have a silver bullet beyond: be aware of privacy. But I agree with Michelle that putting all this responsibility on consumers is not the right way to do it, and unfortunately that's the way we do it in California. My advice to you is, yeah, don't put anything sensitive into a chatbot. That is: know about FERPA, know about HIPAA. Don't put your healthcare information in there. Don't put your personal financial information in there. Anything that you wouldn't leave in front of your house, don't put into a chatbot. I think my takeaway is that expecting your average consumer to know all that stuff is really not fair.
I think there needs to be another regime that protects people a little bit more clearly. 'cause my mom doesn't understand that stuff.
Yeah. And you're right, but I should post on Facebook, like, all my passwords and bank account numbers?
That's helpful.
Let's talk, we were talking earlier, we are all parents of fairly young children here.
The things in the schools scare me. I have three daughters, 10 and under. My oldest is gonna go to middle school next year. Somebody could walk down a hall and take an image of their body and make all sorts of fake things.
How, as parents, how as policymakers, do we protect that from happening? Because it seems well beyond what a K-12 school system can manage, or universities.
I'll just say from a law perspective, we're actively thinking about that at the law school all the time, right? Because our students are adults who are gonna be expected to know how to properly use AI when they become attorneys.
We're really trying to integrate the responsible training of AI from day one in our legal writing classes, for example. For K through 12, as a parent, I really hope that we have phone-free schools coming down the pipeline. I think that's a really good approach to force children to talk to each other instead of to phones, and to eliminate that ability to take that picture in the hallway if you don't have a phone with you, for example.
Kids will always find a way around things, but also educating youth about, here's why we're doing this, is a critical piece here, because I think once youth understand the risk, they're gonna be much more on board with actually going along with the rules.
I would agree. I think K-12 has this real challenge right now that they're grappling with, and I talk to teachers a little bit. How do you, as Michelle said, teach students to use AI responsibly? In particular, thinking about things like ChatGPT. And in some ways it's exposing an old problem that's been around for a while, which is, on one hand, how do you educate students and how do you evaluate that they're learning?
And on the other hand, how do you prevent cheating or plagiarism? Teachers, I think, God bless 'em, they're already overworked and underpaid and don't have enough time to figure out all the stuff they're trying to do, and now they have this new thing on top of 'em that they don't understand that well, and that students maybe understand better.
And they're trying to figure out, what's the role of a large language model in my class? And so some conversation is well just ban it all together. We're not, we're gonna, everything's on pencil and paper. We're not gonna allow students access to computers in our classroom. And then on the other end of the extreme, you have, look, this stuff is coming. Students need to use it. Let's embrace it. Let's figure out how to incorporate in a positive way into our classrooms. But that's really hard too, right? It's really new and fast, and teachers don't have the time or expertise or the support maybe from their administrators to do that.
And so I think everybody's trying to find this happy middle ground here where. They can help students learn how to use these tools effectively, but they can also make sure that, our kids aren't just generating all their essays using chat GBT, but, and then, my punchline that makes all my professor friends mad is I think this is also maybe exposed something in the educational system, which is maybe that two page essay about Jane Eyre that everybody has to write as a freshman is not that great of an assignment.
Maybe we should be a little smarter about what we're asking our students to do and not just do the same cookie cutter crap that chat GPT can generate, but really think about how to measure learning. But again, in this sort of institutionalized educational structure, teachers don't have time to completely redo their syllabus and think about all this stuff.
It's a real conundrum that K 12 students, teachers are facing. And, I really feel for 'em.
Aside from the policy, right, I use all of these tools for work, and I'm playing with a new one now to help me manage my calendar and my task list better. It's geared towards hopefully saving me time, although the learning curve is way too steep at the moment. I'm using ChatGPT to help write something.
Now, it's my ideas; I'm filling in the blanks, I'm just having it compose it all. But how do you teach students that that's not necessarily cheating in the workplace, and that they've gotta find that balance you're talking about? I think from an education policy standpoint, finding that balance is tricky.
It is, and I can talk about how it happens in the classroom; I can't really talk about the policy part. But in the classroom, a lot of our rhetoric and language professors are using ChatGPT for exactly what you said: helping students get their ideas out. If we assume that the point of the exercise is not to make sure you have subject-verb agreement and that you can conjugate correctly,
but the point of the exercise is for you to be able to express yourself better, then the teaching in the classroom becomes: all right, formulate your ideas, think about what a good point is, what are you trying to say? We can use a large language model to help with your grammar, which is great for students with an ESL background, for example, or students with autism, or whoever may have some other learning challenge.
These tools can be really supportive. It becomes a new way for the teacher to think about: let's talk about how an article is constructed, let's talk about how you make a thesis statement, and then you can use the tool to assist you. A different way of thinking about it than me saying, hey, I talked about Thoreau for 20 minutes.
You go home and write your two page paper and come back tomorrow. I want Michelle to maybe talk a little bit about what could happen at policy level there.
I think in the classroom it is a challenge, right?
This fall will be the first time that I'm gonna allow my seminar students to use ChatGPT or an equivalent, Anthropic's Claude, or Gemini, or any of those. I'm thinking, and I haven't thought this entirely through yet, that I'm going to ask them, in addition to their papers, to show me what they actually used it for. They can cut and paste that for me.
When they're turning in an outline, I also want them to turn in a transcript of their communication with the chatbot, because that will then enable me to see how they're using it. And then we'll have a conversation in class about the different ways that people are using it.
Where's the line? And I think if the class comes up with rules, which you can do with adults, that makes it much easier then to see why and how you should have these rules. With K through 12, it's a whole different story, right? Because you have a different dynamic in the classroom, and you have more control as a policymaker as to what's happening in the classroom.
I think there needs to be some sort of a focus on learning together. I think youth especially respond really well when adults say, I need a little bit of your help with this, can you help me figure this out? And so I would love to see some of these working groups and state legislators include not just teachers, but the students themselves.
And this is something Chris and I have actually spoken about: bringing the youth and their parents and their teachers in together and saying, how would you use this? Because frankly, as a parent, I have no idea what kinds of things a youth might use LLMs for, something I haven't even thought about.
And I think it's a mistake to assume that policymakers can do this by themselves. I think we need to have that multi-stakeholder approach in education to really get a handle on this for the future.
And let's talk about jobs and policy around jobs.
You've seen, we're recording this near the end of June 2025, there's been a couple thousand jobs lost in the Bay Area, where companies sometimes say AI is replacing these jobs. From a policy perspective, what should we consider to protect those jobs but allow for the innovation?
Where's that line? How do we tackle it?
I think from a policy perspective, I'm gonna jump on my soapbox again around bringing in the workers to find out exactly what they need and what they could use it for. I have a friend who's a translator in San Francisco. He works for a company as a Korean translator.
They had four people in their department; they now have two. And one of the reasons that he made it through the round of layoffs is because he decided to learn how the AI was working and how he could use it best, and he told his company, here's the best way that we could use it, but you still need human oversight.
One way to protect yourself or your constituents is to say, let's learn how to use this in a way that makes your job more efficient. We recognize that this threatens jobs; let's try to see how we can help folks adapt to this new era. And that is something policymakers could put money and incentives toward.
I have a lot of junior software engineers graduating from USF, and a lot of concern that their jobs are going away because LLMs are great at generating cookie-cutter code. And it's been the same advice: get good at using LLMs, get good at using AI, because a lot of companies have mandated, as a cost-saving measure, that they want their employees to be using GitHub Copilot or Gemini Code or whatever tool they like.
Those employees who are really good at it, or who understand how to use it productively, are doing great. They're getting lots of jobs; all the startups are scooping them up. 'Cause I think what people are finding is that AI is really useful as a tool for generating little modules, but you still need a human to do the architectural work. You still need a human to talk to the customers, to understand the value proposition, to bring everything together and really be the guiding force.
And I think that's the direction you're gonna see AI move in the software industry: I'll be more of the orchestrator and I'll have some tools building a lot of the little helper things. But just as we said at the beginning, I've still gotta be able to look at that and say it's correct. There's still gotta be some human expertise in the loop, and so being able to work with that stuff really well is a great job protector. If all you can do is the thing that everybody's been doing for 30 years, then yeah, your job might be at risk.
I was talking to somebody about executive assistants. Yeah, I think we'll still need them. But instead of an executive assistant at a Google or an Apple covering two or three principals, maybe now they're covering six, 'cause AI is supposed to speed up their job, and then you need half as many.
What other policy questions have I not asked so far, or ideas do you wanna bring up, before we wrap up?
Okay. That's a big question, Jared. I think the way to spot the immediate issues here, in order to determine what policies we should be doing, is to look at state and federal courts, because if you wanna know what your policy should be, you should see what issues are being litigated.
These are the disputes that people have deemed important or significant enough to put legal resources into them. If you want a reactive approach to policymaking, take a look at what's happening in the courts. Expect that it's gonna be coming to a court near you.
New York Times versus OpenAI, this copyright infringement stuff, that's coming. You could get ahead of that using a proactive legislative approach by saying, we expect something like this to come to our jurisdiction. If I were a state legislator, that's the approach I would be taking: talking to the industry and to consumer advocates, and looking at court cases to see what issues are coming down the pipeline.
I'd maybe add one other thing: incentivizing AI research in areas of social need. And that's a direct pushback to what's happening in the federal government today, where we're taking funding away from places like NIH and we're expecting private industry to do all the research.
And nothing against private industry, but their incentives are very short-term, right? OpenAI is thinking about quarterly profits. They're not thinking about a decade from now. I would love to see state and federal governments develop policy that says, hey, you know what? Here's some beneficial areas that we think AI could help us in:
renewable energy, healthcare, things like that. We're gonna specifically incentivize that kind of work. And it might be done by the private sector through contracting; that's usually what happens. But you have a different set of incentives beyond, we gotta hit our quarterly profits, we've gotta be the first to market with this next new tool.
That's where you get the kind of moonshot work that will really make AI transformative and not just a cool new pet rock of the 2025 era.
I think there's a lot in there for people to think about, and I like how you both framed it as a strategic approach rather than specific recommendations. Before we leave here, give me a use case, you don't have to name the company, that you are using AI for.
Ooh, I'll go first. My semi-serious description of large language models is that they're really good BS generators, right? They're really good at making up fun stuff, and they're like your friend at the bar who is pretty smart and knows a lot of trivia but is sometimes wrong. And so they're great for cases where you're okay with that kind of output.
I use ChatGPT and Gemini to play Dungeons and Dragons all the time, and to generate D&D characters, for real, because it's great at that. And I don't care, there is no truth; it's just, we're having fun together. And so stuff like that, where there's not a wrong answer, they're great at that.
And they're really fun.
And I use it in a less fun way, to ask technical questions about things that I don't know around programs. For example, for the center, I have to manage budgets, right? This is not my favorite thing. I didn't become a law professor to manage budgets. I've had to learn Excel, and I decided I have a choice:
I can sit through an eight-hour Excel class, or I can ask the technical experts through these chatbots, Claude or ChatGPT, how do I get a cell to copy into this separate column? And I'm finding that for those technical questions, it's actually quite helpful, because if it's wrong, I know right away that it's wrong, right?
Because I know it's not working; I can't do the technical thing I'm asking it to do in this specific program. I've used it in that context quite successfully, because I know immediately if it's wrong or right. I am not using it for anything around my writing. I feel pretty confident in my writing style.
I don't feel like I need it for that. I'm also not using it for anything beyond basic landscape research. I'm reading those cases and those articles myself so that I can draw my own conclusions from them. But for the little technical things, where I don't wanna sit through a class to learn, like, how do you copy this information and make it a sum in Excel, for example, it's actually quite helpful.
I'll maybe add, NotebookLM, if you use that, is really useful for summarizing long articles. If it's, do I wanna read this 50-page report? I can throw it into NotebookLM and it'll give me a three-paragraph summary that's 90% right. But that's enough for me to know, okay, I wanna spend a couple hours reading this, or, that's not really my thing.
It comes down to discernment as a human; it doesn't take that away from me.
Great point. This was great, and a very different topic, but I think you gave us a lot to think about. Christopher Brooks and Michelle Neitz from the University of San Francisco, I really appreciate you both being here today.
Thank you for having us.
Wait, don't leave yet. Hit subscribe. Make sure you get the weekly updates. We have a new episode every Wednesday for stuff happening in the East Bay. In the meantime, follow me on LinkedIn, Jared Asch, or check out our firm where we have a weekly newsletter and blog at Capstone Government Affairs on LinkedIn.
Thanks for joining us today on the Capstone Conversation.



