January 18, 2024
In today’s interview, Gaurav “GP” Pal from stackArmor sounds a note of caution: a sudden jump into any new technology can bring unintended consequences. He offers suggestions for making AI meaningful, safe, and dependable.
Everyone remembers Bill Murray in Groundhog Day reliving the same day over and over. After a few years in federal technology, we are watching something similar.
First, the commercial sector is dazzled. Next, federal technology leaders feel they must get in the same boat. A few years ago, many agencies jumped headfirst into the cloud, and FedRAMP had to come along to put guardrails on the cloud experience.
Fast forward to 2024, and artificial intelligence is following the same pattern. Federal technology leaders, feeling left out, rush to adopt; only later does guidance emerge to rectify some of AI’s unintended consequences.
Researchers are identifying attack vectors unique to AI. What security principles should be considered when deploying a Large Language Model? Can bias be introduced?
What controls do you have in place today that you can apply to your agency’s AI journey?
To bring a diverse set of opinions to bear, GP discusses his company’s AI Risk Intelligence Center of Excellence. It has assembled a high-powered group of leaders with federal experience to provide training models and actionable steps for a safe and secure transition to AI.
If you enjoyed this episode, then you may want to listen to Ep. 117 Putting Agile in Work Boots for Federal Projects.
John Gilroy appeared on National Public Radio in Washington DC for 25 years; during that time, he wrote 523 technology columns for The Washington Post.
Currently, John is an award-winning lecturer at Georgetown University. He has also recorded more than 1,000 podcast interviews.