The end of affirmative action in college admissions became a catalyst for my focus on workforce development and skills building this past quarter. Recognizing that minority students face greater barriers to college acceptance, compounded by escalating tuition rates and data showing that in January 2022, 18 out of every 10,000 people in America experienced homelessness, I launched a capacity-building program in AI and cybersecurity for businesses and communities.
Given the surge of micro-credentialing in generative AI and cybersecurity fundamentals by large solution providers such as AWS, Microsoft, MIT, and others, my conversations with philanthropists, non-profits, the federal government, and academic leaders turned into requests to make certifications and micro-credentials transferable as college credit and as job experience. “This will provide a pathway to furthering STEM education and support the case for college acceptance, workforce development, and growth.”
During speaking engagements and confidential coaching and training sessions, I hear both excitement and some reluctance to use AI. I recall two separate instances where individuals expressed the concern that “AI is racist.” These clients were frustrated that models inferred the language of the response from their voice, even when they explicitly asked for replies in English. I explained the training process, natural language processing, and how bias enters through data and algorithms, causing inaccurate and sometimes harmful outcomes.
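To make that point concrete, here is a minimal, purely illustrative Python sketch. The data and the “model” are hypothetical, but it shows how skew in training data carries straight through to a model’s behavior, much like the language-detection frustration described above.

```python
# Minimal sketch (hypothetical data): a naive "model" trained on imbalanced
# examples learns to favor the majority label, so the bias in the data
# becomes bias in the predictions.
from collections import Counter

# Hypothetical training set: 90% of "accented English" samples were labeled
# with a non-English response language, 10% with English.
training_data = (
    [("accented_english", "spanish")] * 90
    + [("accented_english", "english")] * 10
)

# Count how often each label was seen for each input feature.
label_counts = {}
for feature, label in training_data:
    label_counts.setdefault(feature, Counter())[label] += 1

def predict(feature):
    # Predict the most frequent label seen during training for this feature.
    return label_counts[feature].most_common(1)[0][0]

# The user explicitly wants English, but the data-driven default overrides them.
print(predict("accented_english"))  # -> "spanish": the bias lives in the data
```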
I recall prompting an AI for ethics content for a presentation to a group of 10th-grade students. I also requested an image of the classroom participants. After many attempts, none of the students depicted in the images were people of color. What message does that send? That students of color are not taught AI ethics, or are perhaps uninterested? It was a teachable moment.
And then there are mistrust and fear of hallucinations, existential threats, dominance, and so on. Here is a friendly reminder: AI is not human, and it is not going to control our brains. It does not have mood swings, and “hallucination” is, in my opinion, an inaccurate term for machine responses that we may be unable to rationalize yet. AI is a machine that is only as good as the data it accesses, the training it receives, and its algorithms.
My capacity-building program entails balancing innovation and risk. I challenge community leaders, citizens, and residents to “be clear-eyed, become part of multidisciplinary and interdisciplinary safety management teams, be represented in the data, practice ethical AI governance, and don’t allow yourself to get left behind.” Experiential learning opportunities that businesses can provide include digital forensics, deepfake detection product evaluations, large language model infrastructure vetting, and applying game theory to detect malware and adversarial patterns (a small sketch of that last idea follows).
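As one illustration of the game-theory idea above, here is a minimal, hypothetical Python sketch of a two-player defender-versus-attacker game. The strategies and payoff values are invented for the example; the defender simply chooses the strategy with the best worst-case outcome (a maximin defense).

```python
# Minimal sketch (hypothetical payoffs): a security game where the defender
# picks which channel to monitor and the attacker picks which to exploit.
# Payoffs are defender utilities; higher is better for the defender.
strategies_defender = ["monitor_email", "monitor_endpoints"]
strategies_attacker = ["phishing", "malware_dropper"]

# Defender utility for each (defender strategy, attacker strategy) pair.
payoffs = {
    ("monitor_email", "phishing"): 5,
    ("monitor_email", "malware_dropper"): -3,
    ("monitor_endpoints", "phishing"): -2,
    ("monitor_endpoints", "malware_dropper"): 4,
}

def maximin_defense():
    # Assume the attacker responds with the move that hurts the defender most,
    # then pick the defense whose worst case is least bad.
    return max(
        strategies_defender,
        key=lambda d: min(payoffs[(d, a)] for a in strategies_attacker),
    )

print(maximin_defense())  # -> "monitor_endpoints" under these illustrative payoffs
```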
I team with clients to learn prompt engineering at the pace it evolves and to use AI for productivity, including grant writing and customer service excellence. I share examples of recent advances, such as how AI is used to detect medical conditions before they surface, and I provide training in next-generation ethics, compliance, and cybersecurity stewardship, promoting data quality and privacy in a data-driven world.
My mission remains to empower equity, opportunity, and sustainability through critical, emerging technologies. I welcome your support and invite you to join me in Capacity-Building: Developing and Strengthening AI and Cybersecurity Skills Across Communities.
CEO & Founder, IsAdvice & Consulting LLC