Latest News
At IsAdvice & Consulting, we keep a vigilant eye on legislative developments and events related to Digital Services. If you're wondering how these changes might affect your business, don't hesitate to explore our confidential coaching and training program or our ethical governance services. We're here to help!
National Security Memorandum on Advancing U.S. Leadership in AI
Here are the main action items for agencies from the executive memorandum on strengthening U.S. leadership in AI for national security:
- Enhance AI Leadership and Innovation
- Strengthen National Security
- Promote Ethical AI Use
- Protect Civil Liberties
- Foster International Collaboration
- Advance Global AI Governance
These steps aim to keep the U.S. at the forefront of AI technology while upholding democratic values and enhancing global cooperation. It's encouraging to see this momentum toward effective governance!
For more details, check the full memorandum.
Biden-Harris Administration Unveils Critical Steps to Protect Workers from Risks of Artificial Intelligence
Since taking office, President Biden, Vice President Harris, and the Biden-Harris Administration have been proactive in leveraging artificial intelligence (AI) to drive innovation and create opportunities, while ensuring that workers benefit from these advancements. President Biden's Executive Order on AI mandates the Department of Labor to create key principles to protect workers and involve them in the development and use of AI technologies. Today, the administration is introducing these principles, with Microsoft and Indeed agreeing to implement them in their workplaces.
NIST's Draft Guidance on Managing Risks in Generative AI
The U.S. National Institute of Standards and Technology (NIST) released four new draft publications on April 29, 2024, aimed at addressing the evolving landscape of artificial intelligence (AI) risks. These publications include the AI RMF Generative AI Profile, which outlines 12 risks and 400 actions for mitigating risks specific to Generative AI (GenAI). Building upon the NIST AI Risk Management Framework (AI RMF), these drafts provide guidance on managing AI risks throughout the lifecycle, emphasizing the need for tailored approaches to address sector-specific challenges and ensuring cross-sectoral alignment in risk management strategies.
The emergence of Generative AI poses unique challenges, intensifying AI risks compared to traditional software. These risks span the AI lifecycle and extend to societal impacts, such as disinformation and long-term inequality. NIST's draft publications identify and categorize risks associated with GenAI, offering a comprehensive set of actions tailored to different stakeholders and organizational contexts. By addressing uncertainties in technology scalability, opaque training data, and diverse stakeholders, NIST aims to enhance risk governance and promote responsible AI development and deployment practices.
OMB Guidance on Federal AI Use
The Office of Management and Budget (OMB) has issued its final guidance on AI governance, innovation, and risk management for federal agencies, building upon President Biden’s Executive Order 14110. The guidance emphasizes pre-deployment AI impact assessments, transparency around AI systems and their impacts, and additional procurement obligations. Updates to the guidance include enhanced transparency requirements, recommendations for biometric identification systems, mechanisms for opting out of AI decision-making, and expanded consultation processes.
National Telecommunications and Information Administration Calls for Audits and Investments in Trustworthy AI Systems
The National Telecommunications and Information Administration (NTIA) under the Department of Commerce released the AI Accountability Policy Report, which includes recommendations aimed at managing risks associated with AI while harnessing its benefits. One significant recommendation is for independent audits of high-risk AI systems. These policies are intended to ensure that AI developers and deployers are accountable for their systems' behavior, promoting transparency, independent evaluations, and consequences for unacceptable risks or unfounded claims. The report outlines eight sets of policy recommendations falling into three categories: Guidance, Support, and Regulations.
NIST Cybersecurity Framework (CSF) 2.0
The NIST Cybersecurity Framework (CSF) 2.0 serves as a guide for various entities, including industry, government agencies, and organizations, in managing cybersecurity risks. It outlines high-level cybersecurity objectives applicable to organizations of all sizes, sectors, and levels of maturity. The framework does not dictate specific methods for achieving these objectives but provides a taxonomy for understanding, assessing, prioritizing, and communicating cybersecurity efforts. It also directs users to online resources for further guidance on implementing practices and controls aligned with the desired outcomes.
FCC Makes AI-Generated Voices in Robocalls Illegal
Feb. 08, 2024 - The Federal Communications Commission (FCC) has declared that calls made with AI-generated voices are considered "artificial" under the Telephone Consumer Protection Act (TCPA). This ruling, effective immediately, outlaws the use of the voice cloning technology commonly used in robocall scams. State Attorneys General now have additional legal means to pursue those responsible for such fraudulent calls.
NIST Release of Cyber Requirements for Controlled Unclassified Information in Nonfederal Systems and Organizations
The National Institute of Standards and Technology (NIST) has released a new draft of cybersecurity requirements aimed at protecting sensitive unclassified information in nonfederal systems, including those used by government contractors. This update, the third revision of NIST Special Publication 800-171, follows a year of data collection and public feedback. The revisions include combining security requirements for consistency, eliminating control tailoring categories for nonfederal organizations, and refining security requirements for protecting controlled unclassified information. Federal agencies are provided with recommended security measures for safeguarding such information in nonfederal systems. The public can offer comments until January 12, 2024, with the final version expected in early 2024.
Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
Today, President Biden is issuing a groundbreaking Executive Order aimed at ensuring America's leadership in harnessing the potential and mitigating the risks of artificial intelligence (AI). This order introduces new standards for AI safety and security, safeguards Americans' privacy, promotes equity and civil rights, advocates for consumer and worker interests, fosters innovation and competition, and enhances American global leadership in AI. This initiative is part of the Biden-Harris Administration's comprehensive approach to responsible innovation and builds upon previous efforts, including voluntary commitments from 15 leading companies to promote the safe and trustworthy development of AI. The Executive Order outlines specific directives for action.
GAO Releases Federal AI Implementation Insights
Government Accountability Office Unveils Insights on Federal AI Implementation: Key Findings & Recommendations
In a comprehensive analysis, the Government Accountability Office (GAO) examines the integration of Artificial Intelligence (AI) within federal agencies. The report assesses agencies' compliance with legal requirements and policy, shedding light on the evolving role of AI in government operations.
NIST Interagency Report
NIST IR 8473 - Cybersecurity Framework Profile for Electric Vehicle Extreme Fast Charging Infrastructure
Explore the intricacies of the Electric Vehicle Extreme Fast Charging (EV/XFC) ecosystem through this comprehensive Cybersecurity Framework Profile. Encompassing four key domains - Electric Vehicles (EV), Extreme Fast Charging (XFC), XFC Cloud or Third-Party Operations, and Utility and Building Networks - the Profile aligns with the NIST Cybersecurity Framework Version 1.1. Offering voluntary guidance, it empowers stakeholders within the EV/XFC industry to tailor Profiles specific to their organizations. This resource facilitates understanding, assessment, and communication of cybersecurity postures as integral components of their risk management processes. It is designed as a supplementary tool, intended to complement rather than replace existing risk management programs, as well as cybersecurity standards, regulations, and industry guidelines prevalent in the EV/XFC sector.
Acknowledgment: The National Cybersecurity Center of Excellence (NCCoE) recognized Pamela K. Isom, CEO of IsAdvice & Consulting, among many other individuals for their valuable contributions, discussions, and insights throughout the development of this profile.
Blueprint for an AI Bill of Rights
The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People was published by the White House Office of Science and Technology Policy in October 2022. This framework was released one year after OSTP announced the launch of a process to develop "a bill of rights for an AI-powered world."
The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees
Employers now have a wide variety of computer-based tools available to assist them in hiring workers, monitoring worker performance, determining pay or promotions, and establishing the terms and conditions of employment. Employers may utilize these tools in an attempt to save time and effort, increase objectivity, or decrease bias...
Office of the Privacy Commissioner for Personal Data, Hong Kong (PCPD) issues guide on AI in Banking
As the widespread adoption of generative AI (genAI) increases and opportunities to utilize genAI in the banking and finance sector continue to grow, this guide aims to discuss its ethical and responsible applications.
Colorado AI Act Signed by Governor Polis
Governor Polis has signed the Colorado AI Act, marking it as the first comprehensive AI regulation in the U.S. We're excited to share this historic milestone, which paves the way for robust AI governance and aims to reduce the risk of algorithmic discrimination.
Wisconsin requires labeling of AI-generated materials in campaign ads
The Wisconsin state legislature recently passed a bipartisan bill aimed at addressing the use of AI-generated deepfake content in political campaigns. The bill mandates disclosure of the use of "synthetic media," defined as content substantially produced by generative AI, in campaign advertisements. Campaigns must label such content with "Contains content generated by AI" at the beginning and end, with violators facing penalties of up to $1,000 per offense. Wisconsin joins several other states in passing similar legislation amid concerns that AI could be used to deceive voters and undermine election integrity leading up to the 2024 presidential election.
DC's Mayor Signs Order Defining DC’s AI Values and AI Strategic Plan
On February 8, 2024, Mayor Muriel Bowser signed a Mayor's Order detailing the steps the DC Government is taking to utilize artificial intelligence (AI) in public services and how both the government and the community can benefit from this advancement. Additionally, the Mayor introduced DC's Artificial Intelligence Values Statement and Strategic Plan, which aims to ensure that the District's adoption of generative AI remains in accordance with DC's core values.
Implementation of Standards for the Safe Use of Artificial Intelligence Across Virginia's Commonwealth
Governor Glenn Youngkin has issued Executive Order 30, introducing AI Education Guidelines for classrooms and implementing AI Policy and Information Technology Standards to safeguard Virginia's databases and protect individual data. Virginia is at the forefront of AI innovation with these pioneering standards and guidelines.
New Jersey Passes New Data Privacy Law
On January 16, Governor Phil Murphy of New Jersey signed S332/A1971 into law, making New Jersey the 14th state to adopt comprehensive data privacy legislation. This new legislation mandates that certain entities, including internet websites and online providers, must notify consumers about the collection and disclosure of their personal data.
Effective date: January 16, 2025
Governor Moore Announces Action to Transform Maryland Executive Branch Digital Services
Governor Wes Moore announced four key initiatives to transform Maryland's state government digitally: responsible AI use, user-centered design, equitable IT access, and strengthened digital infrastructure partnerships. Emphasizing modernization's importance, Moore aims to improve user experiences, accessibility, and cybersecurity. The Moore-Miller Administration, with the Maryland Department of Information Technology, is committed to delivering technology solutions for enhanced safety, efficiency, and productivity.
"Catalyzing the Responsible and Productive Use of Artificial Intelligence" is Maryland Governor Moore's visionary initiative to transform the state's IT services.
Key Highlights:
AI Regulations: A comprehensive set of regulations to guide and govern the responsible deployment of Artificial Intelligence within the state.
Enhanced Accessibility: Policies ensuring seamless access to state platforms for all residents, fostering inclusivity and efficiency.
Collaborative Cybersecurity: Embracing a collaborative approach to cybersecurity, emphasizing shared responsibility and protection against digital threats.
Innovation Hub: The establishment of a dedicated office committed to driving user-centric digital innovations, promoting a forward-thinking and responsive government.
New Jersey Governor Establishes State Artificial Intelligence Task Force
Building upon New Jersey’s legacy of leading the next frontiers of discovery and innovation, Governor Phil Murphy today established an Artificial Intelligence Task Force charged with studying emerging artificial intelligence (AI) technologies. The Task Force will be responsible for analyzing the potential impacts of AI on society as well as preparing recommendations to identify government actions encouraging the ethical use of AI technologies...
NYC Artificial Intelligence Action Plan; Local Law 144
Artificial intelligence (AI) is often described as a revolutionary technology that is rapidly changing the way we work, travel, conduct research, deliver healthcare, provide public services, and more. In particular, the emergence of groundbreaking generative AI tools over the last year has simultaneously sparked tremendous excitement, profound concern, and intense speculation about their potential far-reaching impacts on humanity...
Commonwealth of Virginia Executive Directive
By virtue of the authority vested in me as Governor, I hereby issue this Executive Directive to ensure the responsible, ethical, and transparent use of artificial intelligence technology by the state government in order to protect the rights of Virginians and develop targeted, innovative uses for this emerging...
Governor Newsom Signs Executive Order to Prepare California for the Progress of Artificial Intelligence
California is the global hub for generative artificial intelligence (GenAI) – we are the natural leader in this emerging field of technology – tools that could very well change the world. To capture its benefits for the good of society, but also to protect against its potential harms, Governor Newsom issued an executive order today laying out how California’s measured approach will focus on shaping the future of ethical, transparent, and trustworthy AI, while remaining the world’s AI leader...
Oklahoma Executive Order
OPM Guidance on AI Competencies Under the AI in Government Act of 2020
The U.S. Office of Personnel Management (OPM), in collaboration with the Office of Science and Technology Policy (OSTP), is issuing specific guidance pursuant to Public Law 116-260, the AI in Government Act of 2020 (the Act). In accordance with the Act, OPM is required to identify key skills and competencies needed for positions related to Artificial Intelligence (AI)...
Illinois Artificial Intelligence Video Interview Act
Disclosure of the use of artificial intelligence analysis. An employer that asks applicants to record video interviews and uses an artificial intelligence analysis of the applicant-submitted videos shall do all of the following when considering applicants for positions based in Illinois before asking applicants to submit video interviews...
Virginia's Consumer Data Protection Act
The Consumer Data Protection Act empowers individuals with the right to access and request the deletion of their personal data held by businesses. Moreover, it mandates companies to conduct comprehensive data protection assessments, specifically concerning the processing of personal data for targeted advertising and sales. Notably, this legislation includes provisions to regulate the utilization of de-identified data, ensuring that even modified data, no longer directly identifying individuals, is subject to scrutiny.
Division of Insurance Partners with ORCAA to Protect Colorado Insurance Consumers
Governor Polis of Colorado signed Senate Bill 21-169 into law last summer, aimed at restricting insurers' use of external consumer data to prevent unfair discrimination based on factors such as race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression. The Division of Insurance (DOI) is collaborating with O'Neil Risk Consulting & Algorithmic Auditing (ORCAA), led by Cathy O'Neil, to implement the law. The legislation responds to concerns about the potential harm to protected groups, including Black, Indigenous, and people of color, resulting from the unchecked use of big data in insurance practices, despite the technology's potential benefits for both insurers and consumers.