The challenges and opportunities for ethics in generative artificial intelligence in the digital age
DOI: https://doi.org/10.15446/dyna.v92n236.117144

Keywords: project management, generative artificial intelligence, algorithmic discrimination, ethical implications, algorithmic justice
Generative Artificial Intelligence (GenAI) emerged as a prominent tool in early 2023, renowned for its capability to generate unique texts and images from minimal input. Despite its growing popularity, the ethical implications of this technology remain under-explored. This study examines the ethical dimensions of GenAI, focusing in particular on the guidelines needed throughout the lifecycle of the algorithms that drive it. We employed a qualitative, non-experimental, descriptive, and exploratory methodology, based on a comprehensive bibliometric analysis of 150 bibliographic references. The analysis highlighted significant concerns regarding algorithmic discrimination, justice, data privacy, and the inherent risks associated with this nascent technology. The findings reveal a pressing need for robust protocols to govern the development of GenAI applications and to mitigate risks such as algorithmic bias and privacy breaches; without such frameworks, the broader social and economic impacts of GenAI pose substantial challenges. The paper concludes by discussing the profound ethical implications of digital platforms driven by Generative Artificial Intelligence.
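As a purely illustrative sketch of the kind of bibliometric pass described above, the snippet below counts keyword frequencies across a bibliographic export. The file name references.csv, its keywords column, and the semicolon-separated keyword format are assumptions made for this example; they are not details taken from the study itself.

```python
# Hypothetical sketch: keyword-frequency pass over a bibliographic corpus.
# The file "references.csv" and its "keywords" column are assumptions for
# illustration only; they do not come from the study described above.
import csv
from collections import Counter


def keyword_frequencies(path: str) -> Counter:
    """Count how often each keyword appears across the reference corpus."""
    counts: Counter = Counter()
    with open(path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            # Keywords are assumed to be stored as a semicolon-separated string.
            for keyword in (row.get("keywords") or "").split(";"):
                keyword = keyword.strip().lower()
                if keyword:
                    counts[keyword] += 1
    return counts


if __name__ == "__main__":
    # Print the ten most frequent terms, e.g. "algorithmic bias: 12".
    for term, count in keyword_frequencies("references.csv").most_common(10):
        print(f"{term}: {count}")
```

In practice, term counts of this kind would serve only as a starting point, feeding into co-occurrence and trend analyses rather than standing on their own.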