
CS158-2 Activity #5

Instructions:

Answer the following questions in less than 100 words each.

Provide 10 legal issues stated in the article “Legal and human rights issues of AI: Gaps, challenges and
vulnerabilities” by Rowena Rodrigues. For each legal issue, answer the questions below.

Privacy Concerns:
• Description: AI systems often collect and process vast amounts of personal data, raising
concerns about individuals' privacy.
• Issue Impact: It can lead to unauthorized data access, identity theft, and surveillance.
• Current Addressing: Privacy regulations like GDPR in Europe and CCPA in California set rules for
AI data handling.
• Resolution Timeline: Ongoing efforts, but likely to remain a long-term issue.
• Opinion: Stricter enforcement of existing regulations and the development of AI-specific privacy
laws can address this issue.
Bias and Fairness:
• Description: AI algorithms can produce biased or unfair outcomes, often due to biased training
data.
• Issue Impact: Discriminatory decisions in hiring, lending, and law enforcement.
• Current Addressing: Research on bias mitigation techniques and fairness-aware algorithms.
• Resolution Timeline: Ongoing research, but may persist due to complex challenges.
• Opinion: Continued research and transparency in AI development can help mitigate bias.
Accountability and Liability:
• Description: Determining who is responsible for AI errors or harm can be challenging.
• Issue Impact: Difficulty in seeking legal remedies for AI-related accidents.
• Current Addressing: Legal frameworks are evolving to assign liability, but it's a complex issue.
• Resolution Timeline: It may take time to establish clear liability standards.
• Opinion: Developing AI-specific liability laws and contracts can provide clearer accountability.
Intellectual Property:
• Description: Issues around AI-generated content and inventions, and ownership of AI-generated
work.
• Issue Impact: Confusion over intellectual property rights.
• Current Addressing: Intellectual property laws are adapting to address AI-related challenges.
• Resolution Timeline: Ongoing adjustments to IP laws will continue.
• Opinion: Clarity in intellectual property rights for AI-generated content is essential.
Ethical Dilemmas:
• Description: AI can be used for ethically questionable purposes.
• Issue Impact: Ethical concerns in AI applications like autonomous weapons.
• Current Addressing: Debates and discussions on ethical AI development and usage.
• Resolution Timeline: Continuous efforts to establish ethical guidelines.
• Opinion: Widespread adoption of ethical AI principles and international agreements can help.
Transparency and Explainability:
• Description: AI models can be complex and difficult to interpret, raising concerns about
transparency.
• Issue Impact: Lack of transparency can lead to mistrust and challenges in decision justification.
• Current Addressing: Development of explainable AI (XAI) methods.
• Resolution Timeline: Ongoing efforts to improve transparency.
• Opinion: Promoting XAI techniques and regulations on transparency can address this issue.
Cybersecurity Risks:
• Description: AI systems can be vulnerable to cyberattacks, and AI can be used for malicious
purposes.
• Issue Impact: Security breaches and AI-driven cyber threats.
• Current Addressing: Enhanced cybersecurity measures and AI security research.
• Resolution Timeline: Ongoing as cybersecurity threats evolve.
• Opinion: Continuous improvement in AI security, combined with proactive defense mechanisms, is crucial.
Job Displacement and Employment Law:
• Description: AI automation can displace human jobs, raising concerns about employment law.
• Issue Impact: Job loss and potential legal challenges regarding worker rights.
• Current Addressing: Labor laws may need updates to account for AI's impact on employment.
• Resolution Timeline: The evolution of labor laws will be ongoing.
• Opinion: Developing reskilling programs and safety nets for displaced workers is essential.
Regulatory Gaps:
• Description: Rapid AI advancements often outpace regulatory frameworks.
• Issue Impact: Lack of clear rules and standards for AI development and deployment.
• Current Addressing: Governments are working on AI regulation, but it lags.
• Resolution Timeline: Regulatory frameworks may take time to catch up.
• Opinion: Collaboration between governments, industry, and experts can expedite regulatory
development.
Data Security and Ownership:
• Description: Questions regarding who owns and controls data used by AI systems.
• Issue Impact: Data breaches and disputes over data ownership.
• Current Addressing: Data protection laws and data ownership agreements.
• Resolution Timeline: Ongoing legal and technological developments.
• Opinion: Strengthening data protection laws and promoting data ownership transparency can
help address this issue.

1. Artificial Intelligence issue and description

Key issues with AI include bias, privacy concerns, accountability gaps, ethical dilemmas, a lack of
transparency, and cybersecurity risks. Biased AI can produce discriminatory outcomes, and large-scale data
collection raises privacy concerns. Because AI makes autonomous judgments, assigning responsibility and
liability for its errors is difficult. Ethical dilemmas include contentious uses such as autonomous weapons. The
opacity of AI models undermines transparency. Cybersecurity risks cover both malicious use of AI and attacks on
AI systems themselves. Automation displaces jobs, and regulatory gaps slow the development of sound AI
legislation. Data security and ownership disputes center on control of large datasets, while intellectual property
questions surround AI-generated inventions, posing complex dilemmas for AI development and governance.
2. Why is it considered an issue and how does it impact AI?

Bias in AI is a critical problem because it can produce unjust, discriminatory outputs that disproportionately
harm disadvantaged communities. It affects AI by preserving and amplifying the societal prejudices embedded in
training data, which can lead to biased decisions in domains such as hiring, lending, and law enforcement. This
not only jeopardizes equity and fairness but also undermines public trust in AI systems. Biased AI solutions can
further damage an organization's reputation and legal standing. Addressing bias is therefore essential to ensure
fair, ethical AI applications and to sustain trust in these technologies.

3. How is it currently being addressed?

Several strategies are being used to address bias in AI. Researchers are developing bias-mitigation techniques
such as adversarial networks and resampling of training data; a minimal illustration of the resampling idea is
sketched below. Organizations such as the IEEE and ACM are setting ethical standards and best practices.
Governments are drafting regulations that require AI systems to be fair and transparent. Auditing and
accountability mechanisms are being built to detect and correct biased decisions. Businesses are investing in
more diverse data sources and teams to reduce bias in the data used for AI training. Overall, a multidisciplinary
effort spanning technology, policy, and education is underway to address this issue properly.
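
As a minimal sketch of the resampling idea mentioned above (my own illustration, not taken from the article),
the Python snippet below oversamples rows of an under-represented group until all groups appear equally often
in the training data. The column name "group" and the toy dataset are hypothetical.

import numpy as np
import pandas as pd

def oversample_minority_groups(df: pd.DataFrame, group_col: str, seed: int = 0) -> pd.DataFrame:
    """Oversample under-represented groups until every group is equally sized."""
    max_size = df[group_col].value_counts().max()
    balanced_parts = []
    for _, part in df.groupby(group_col):
        extra = max_size - len(part)
        if extra > 0:
            # Draw additional rows (with replacement) from the smaller group.
            part = pd.concat([part, part.sample(n=extra, replace=True, random_state=seed)])
        balanced_parts.append(part)
    # Shuffle so the groups are interleaved again before training.
    return pd.concat(balanced_parts).sample(frac=1.0, random_state=seed).reset_index(drop=True)

# Example usage with a toy, imbalanced dataset (group "B" is under-represented):
toy = pd.DataFrame({
    "group": ["A"] * 90 + ["B"] * 10,
    "feature": np.random.default_rng(0).normal(size=100),
})
balanced = oversample_minority_groups(toy, "group")
print(balanced["group"].value_counts())  # both groups now have 90 rows

Rebalancing the data like this does not remove bias by itself, but it reduces the chance that a model simply
ignores a small group during training.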

4. Will it likely be resolved soon? Or will it be a long-term problem?

Bias in AI is unlikely to be eliminated soon. It is a multifaceted problem with roots in data collection,
societal biases, and computational limitations. Fairness and bias reduction in AI systems remain ongoing
challenges. Despite significant advances in research, laws, and ethical norms, eradicating bias is difficult
because data and human prejudices are constantly evolving. Continuous effort and collaboration among
researchers, policymakers, and industry stakeholders will be essential to make AI systems as fair as possible,
but the issue is likely to remain a long-term problem.

5. In your own opinion, how can the issue be addressed?

Overcoming bias in AI requires a comprehensive approach. A strong emphasis should be placed on diverse,
representative training data to reduce bias at the source. Transparent model development and documentation can
help stakeholders understand AI decisions. Continuous research into bias-mitigation techniques, together with
rigorous testing and auditing, can further improve fairness. Developers should receive ethical AI education and
training to raise awareness. Finally, regulatory bodies must act by enforcing standards and sanctioning biased
AI systems. Cooperation among technologists, policymakers, ethicists, and the general public is needed to ensure
that AI systems promote fairness and equity.
