Addressing Social and Legal Issues in AI and ML: Proposed Norms

By L.G.A.R.M. Rawzan · Published 3 years ago · 3 min read
Photo by Ian Schneider on Unsplash

Introduction: The rapid advancement of Artificial Intelligence (AI) and Machine Learning (ML) has brought substantial benefits to society, but these technologies also raise significant social and legal concerns. To ensure that AI and ML systems are developed and deployed responsibly and ethically, it is crucial to establish norms and guidelines. This article proposes a set of such norms, focused on the social and legal dimensions, to help mitigate the risks and challenges these technologies pose. The aim is to strike a balance between innovation and the protection of human rights, privacy, and fairness.

• Transparency and Explainability: To address concerns about the opacity of AI and ML systems, norms should promote openness and accountability. Developers and organizations should provide clear documentation of the algorithms, data sources, and decision-making processes underlying their models. Such transparency enables better understanding and scrutiny of how a system works, reducing the risk of bias, discrimination, and unfairness.
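To make the explainability point concrete, here is a minimal sketch of permutation importance, one simple, model-agnostic technique for explaining which inputs a model actually relies on. The toy model, feature values, and threshold below are all illustrative assumptions, not part of any particular system.

```python
import random

def permutation_importance(predict, rows, labels, n_features, seed=0):
    """Estimate feature importance by measuring how much accuracy drops
    when one feature's values are shuffled across rows, which breaks
    that feature's link to the label while keeping its distribution."""
    rng = random.Random(seed)

    def accuracy(data):
        return sum(predict(x) == y for x, y in zip(data, labels)) / len(labels)

    baseline = accuracy(rows)
    importances = []
    for j in range(n_features):
        column = [row[j] for row in rows]
        rng.shuffle(column)
        # Rebuild the dataset with only feature j permuted.
        shuffled = [row[:j] + (v,) + row[j + 1:] for row, v in zip(rows, column)]
        importances.append(baseline - accuracy(shuffled))
    return importances

# Toy model: approves (1) when feature 0 (say, income) exceeds 50;
# feature 1 is ignored, so its importance should be zero.
predict = lambda x: 1 if x[0] > 50 else 0
rows = [(30, 7), (60, 2), (80, 9), (40, 1), (55, 5), (20, 3)]
labels = [predict(x) for x in rows]
imps = permutation_importance(predict, rows, labels, n_features=2)
```

A report of such importances, alongside documentation of data sources, is one inexpensive way for developers to support the scrutiny this norm calls for.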

• Ethical Data Collection and Usage: Norms should govern how data is collected and used in AI and ML systems. Privacy and data protection must be respected, and informed consent obtained from the individuals whose data is used. Training data should be diverse, representative, and as free from bias as possible. Norms can also encourage techniques such as differential privacy, which safeguards sensitive information while still enabling effective learning.
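As a concrete illustration of differential privacy, the sketch below implements the classic Laplace mechanism for a counting query. The dataset, predicate, and epsilon value are illustrative assumptions; real deployments would use a vetted library rather than hand-rolled noise.

```python
import random

def private_count(records, predicate, epsilon=1.0):
    """Return an epsilon-differentially-private count of matching records.

    A counting query has sensitivity 1 (adding or removing one person's
    record changes the count by at most 1), so adding Laplace noise with
    scale 1/epsilon satisfies epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    # Laplace(0, 1/epsilon) noise, sampled as the difference of two
    # exponentials with rate epsilon.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Illustrative query: how many people in a dataset are 30 or older?
ages = [23, 37, 41, 29, 52, 19, 44]
noisy = private_count(ages, lambda a: a >= 30, epsilon=0.5)
```

Smaller epsilon means more noise and stronger privacy; the norm's trade-off between utility and protection shows up directly in that single parameter.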

• Fairness and Non-Discrimination: To counter algorithmic bias and discriminatory outcomes, norms should promote fairness in AI and ML systems. Developers should work to avoid bias in training data and algorithms, and regularly evaluate and mitigate biases that emerge after deployment. Guidelines should also ensure that systems do not disproportionately harm individuals or groups on the basis of factors such as race, gender, or socioeconomic status.
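One widely used starting point for the evaluation this norm calls for is a disparate-impact check: compare per-group selection rates and flag large gaps. The group labels and decisions below are illustrative; the 0.8 threshold reflects the informal "four-fifths rule" used in US employment-discrimination practice, not a universal legal standard.

```python
def selection_rates(outcomes):
    """Compute per-group positive-decision rates from (group, decision) pairs."""
    totals, positives = {}, {}
    for group, decided_yes in outcomes:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if decided_yes else 0)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(outcomes):
    """Ratio of the lowest to the highest group selection rate.

    Ratios below roughly 0.8 are commonly treated as a red flag
    warranting closer review of the model and its training data.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Illustrative decisions: group A is selected 3/4 of the time, group B 1/4.
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
ratio = disparate_impact(decisions)
```

A check like this is cheap to run at every deployment, which makes it a natural candidate for the periodic evaluations the norm envisions.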

• Accountability and Liability: Norms should define the responsibilities and liability of the developers, organizations, and users of AI and ML systems. Developers should be accountable for the performance and behavior of their systems, and mechanisms should exist to address the harms those systems may cause. Norms should also encourage organizations to implement redress, complaint-handling, and auditing mechanisms so that the use of these technologies remains transparent and accountable.

• Human Oversight and Control: Norms should emphasize the importance of human oversight and control in AI and ML systems. While these technologies can automate many tasks, critical decisions affecting individuals should involve human judgment. Humans must retain the ability to understand, intervene in, and override these systems when necessary, especially in domains with significant social and legal stakes such as healthcare, criminal justice, and autonomous vehicles.
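The override requirement above is often implemented as a confidence-based triage gate: low-confidence or high-stakes predictions are routed to a human rather than acted on automatically. The function names, threshold, and return fields below are a hypothetical sketch of that pattern, not any system's actual API.

```python
def triage(prediction, confidence, high_stakes, threshold=0.9):
    """Decide whether an ML prediction may be acted on automatically.

    High-stakes cases (e.g. medical or sentencing decisions) always go
    to a human reviewer, as do predictions the model is unsure about;
    everything else proceeds automatically but remains overridable.
    """
    if high_stakes or confidence < threshold:
        return {"action": "human_review", "prediction": prediction}
    return {"action": "auto", "prediction": prediction, "overridable": True}

# A confident, low-stakes prediction runs automatically; the same
# prediction in a high-stakes domain is escalated to a person.
routine = triage("approve", confidence=0.97, high_stakes=False)
critical = triage("approve", confidence=0.97, high_stakes=True)
```

Keeping the `overridable` flag on even the automated path reflects the norm's insistence that humans can intervene after the fact, not just before.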

Regulators are already moving in this direction. India's Telecommunication Engineering Centre (TEC), which sets standards for communication products sold in the country, has started a consultation process to develop a framework for the fair assessment of AI and ML systems, with the goal of building public trust.

The technical arm of the Department of Telecommunications has invited inputs from the public by March 8 on a framework for resolving the ethical, social, and legal issues that AI and ML systems may raise.

TEC said AI and ML applications are increasingly used across domains such as healthcare, agriculture, smart cities, smart homes, finance, defence, transport, logistics, natural language processing, and surveillance.

"With the aim to build public trust in AI/ ML Systems, TEC is working on Voluntary Fairness Assessment of AI/ ML Systems. Accordingly, TEC is initiating stakeholder consultations and has invited suggestions for framing procedures for assessing fairness for different types of AI/ ML Systems," TEC said in a statement.

Conclusion: The development and deployment of AI and ML systems should be guided by norms that address the social and legal challenges they present. The norms proposed here focus on transparency, ethical data usage, fairness, accountability, and human oversight. By adhering to them, developers and organizations can mitigate the risks of AI and ML and deploy these technologies responsibly and ethically. Norms alone, however, are not sufficient; enforcement and compliance mechanisms must be established through collaboration among policymakers, industry stakeholders, and the research community. By embracing these norms and working together, we can harness the transformative potential of AI and ML while safeguarding our societal values, rights, and principles.
