<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article
  PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "https://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink"
         xmlns:mml="http://www.w3.org/1998/Math/MathML"
         dtd-version="1.1"
         article-type="research-article"
         xml:lang="en"
         specific-use="sps-1.9">
   <front>
      <journal-meta>
         <journal-id journal-id-type="publisher-id">dyna</journal-id>
         <journal-title-group>
            <journal-title>DYNA</journal-title>
            <abbrev-journal-title abbrev-type="publisher">Dyna rev.fac.nac.minas</abbrev-journal-title>
         </journal-title-group>
         <issn pub-type="ppub">0012-7353</issn>
         <issn pub-type="epub">2346-2183</issn>
         <publisher>
            <publisher-name>Universidad Nacional de Colombia</publisher-name>
         </publisher>
      </journal-meta>
      <article-meta>
         <article-id pub-id-type="doi">10.15446/dyna.v90n230.111700</article-id>
         <article-categories>
            <subj-group subj-group-type="heading">
               <subject>Articles</subject>
            </subj-group>
         </article-categories>
         <title-group>
            <article-title>Prompt Engineering: a methodology for optimizing interactions with AI-Language Models in the field of engineering</article-title>
            <trans-title-group xml:lang="es">
               <trans-title>Ingeniería de instrucciones: una metodología para optimizar interacciones con Modelos de Lenguaje de IA en el campo de ingeniería</trans-title>
            </trans-title-group>
         </title-group>
         <contrib-group>
            <contrib contrib-type="author">
               <contrib-id contrib-id-type="orcid">0000-0003-3043-3037</contrib-id>
               <name>
                  <surname>Velásquez-Henao</surname>
                  <given-names>Juan David</given-names>
               </name>
               <xref ref-type="aff" rid="aff1">
                  <sup>a</sup>
               </xref>
            </contrib>
            <contrib contrib-type="author">
               <contrib-id contrib-id-type="orcid">0000-0002-7750-857X</contrib-id>
               <name>
                  <surname>Franco-Cardona</surname>
                  <given-names>Carlos Jaime</given-names>
               </name>
               <xref ref-type="aff" rid="aff1">
                  <sup>a</sup>
               </xref>
            </contrib>
            <contrib contrib-type="author">
               <contrib-id contrib-id-type="orcid">0000-0002-6025-5940</contrib-id>
               <name>
                  <surname>Cadavid-Higuita</surname>
                  <given-names>Lorena</given-names>
               </name>
               <xref ref-type="aff" rid="aff1">
                  <sup>a</sup>
               </xref>
            </contrib>
            <aff id="aff1">
               <label>a</label>
               <institution content-type="original"> Universidad Nacional de Colombia, sede Medellín, Facultad de Minas, Medellín,  Colombia. jdvelasq@unal.edu.co, cjfranco@unal.edu.co, dlcadavi@unal.edu.co </institution>
               <institution content-type="normalized">Universidad Nacional de Colombia</institution>
               <institution content-type="orgname">Universidad Nacional de Colombia</institution>
               <institution content-type="orgdiv1">Facultad de Minas</institution>
               <addr-line>
                  <city>Medellín</city>
               </addr-line>
               <country country="CO">Colombia</country>
               <email>jdvelasq@unal.edu.co</email>
            </aff>
         </contrib-group>
         <pub-date publication-format="electronic" date-type="pub">
            <day>17</day>
            <month>09</month>
            <year>2024</year>
         </pub-date>
         <pub-date publication-format="electronic" date-type="collection">
            <season>Nov-Dec</season>
            <year>2023</year>
         </pub-date>
         <volume>90</volume>
         <issue>spe230</issue>
         <fpage>9</fpage>
         <lpage>17</lpage>
         <permissions>
            <license xml:lang="en" license-type="open-access"
                     xlink:href="https://creativecommons.org/licenses/by-nc-nd/4.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License</license-p>
            </license>
         </permissions>
         <abstract>
            <title>Abstract</title>
            <p>ChatGPT is a versatile conversational Artificial Intelligence model that responds to user input prompts, with applications in academia and various sectors. However, crafting effective prompts can be challenging, leading to potentially inaccurate or contextually inappropriate responses and emphasizing the importance of prompt engineering in achieving accurate outcomes across different domains. This study aims to address this void by introducing a methodology for optimizing interactions with Artificial Intelligence language models, like ChatGPT, through prompts in the field of engineering. The approach, called GPEI, relies on the latest advancements in this area and consists of four steps: define the objective, design the prompt, evaluate the response, and iterate. Our proposal involves two key aspects: the inclusion of data in prompt design for engineering applications and the integration of Explainable Artificial Intelligence principles to assess responses, enhancing transparency. It combines insights from various methodologies to address issues like hallucinations, emphasizing iterative prompt refinement techniques such as posing opposing questions and using specific patterns for improvement. This methodology could improve prompt precision and utility in engineering.</p>
         </abstract>
         <trans-abstract xml:lang="es">
            <title>Resumen</title>
            <p>ChatGPT es un modelo de Inteligencia Artificial conversacional versátil que responde a las instrucciones de entrada del usuario, con aplicaciones en el mundo académico y en diversos sectores. Sin embargo, elaborar instrucciones efectivas puede ser un desafío, lo que puede llevar a respuestas inexactas o contextualmente inapropiadas y enfatiza la importancia de la ingeniería de instrucciones para lograr resultados precisos en diferentes dominios. Este estudio pretende abordar este vacío introduciendo una metodología para optimizar las interacciones con modelos de lenguaje de Inteligencia Artificial, como ChatGPT, a través de instrucciones en el campo de la ingeniería. El enfoque, llamado GPEI, se basa en los últimos avances en esta área y consta de cuatro pasos: definir el objetivo, diseñar la instrucción, evaluar la respuesta e iterar. Nuestra propuesta involucra dos aspectos clave: la inclusión de datos en el diseño de instrucciones para aplicaciones de ingeniería y la integración de principios de Inteligencia Artificial Explicable para evaluar las respuestas, mejorando la transparencia. Combina conocimientos de varias metodologías para abordar problemas como las alucinaciones, enfatizando técnicas de refinamiento iterativo de instrucciones, como plantear preguntas opuestas y usar patrones específicos de mejora. Esta metodología podría mejorar la precisión y la utilidad de las instrucciones en ingeniería.</p>
         </trans-abstract>
         <kwd-group xml:lang="en">
            <title>Keywords:</title>
            <kwd>ChatGPT</kwd>
            <kwd>prompt engineering</kwd>
            <kwd>large language models</kwd>
            <kwd>prompt design</kwd>
         </kwd-group>
         <kwd-group xml:lang="es">
            <title>Palabras clave:</title>
            <kwd>ChatGPT</kwd>
            <kwd>ingeniería de instrucciones</kwd>
            <kwd>grandes modelos de lenguaje</kwd>
            <kwd>diseño de instrucciones</kwd>
         </kwd-group>
         <counts>
            <fig-count count="3"/>
            <table-count count="5"/>
            <equation-count count="0"/>
            <ref-count count="48"/>
            <page-count count="9"/>
         </counts>
      </article-meta>
   </front>
   <body>
      <sec sec-type="intro">
         <title>1. Introduction</title>
          <p>ChatGPT is the acronym for Chat Generative Pre-Trained Transformer, a model that responds to user requests expressed as input text prompts [<xref ref-type="bibr" rid="B1">1</xref>]. As a conversational large language model (LLM), ChatGPT has seen extensive applications across various sectors, including academia, and it can produce text that is often indistinguishable from that authored by humans [<xref ref-type="bibr" rid="B2">2</xref>,<xref ref-type="bibr" rid="B3">3</xref>]. As a generative AI platform, ChatGPT allows users to input text prompts, from which it generates responses based on the knowledge accumulated during its training phase [<xref ref-type="bibr" rid="B4">4</xref>]. Typical applications include text generation, summarization, and translation [<xref ref-type="bibr" rid="B5">5</xref>].</p>
          <p>A prompt is a text describing a set of instructions that customize, refine, or enhance the capabilities of a GPT model [<xref ref-type="bibr" rid="B6">6</xref>]. Effective prompts are characterized by the following fundamental principles [<xref ref-type="bibr" rid="B7">7</xref>]: clarity and precision, contextual information, desired format, and verbosity control. However, writing effective prompts can be difficult for non-technical users, requiring creativity, intuition, and iterative refinement [<xref ref-type="bibr" rid="B7">7</xref>]. The problem becomes more significant when precise information must be incorporated to solve tasks in specific contexts.</p>
          <p>Consequently, when an inadequate prompt is used, the answers can be vague, imprecise, factually incorrect, or contextually inappropriate [<xref ref-type="bibr" rid="B5">5</xref>]. In this context, prompt engineering emerges as a discipline for designing prompts by which users can program a Large Language Model, such as ChatGPT, to produce accurate answers [<xref ref-type="bibr" rid="B8">8</xref>,<xref ref-type="bibr" rid="B9">9</xref>].</p>
          <p>During the last year, much gray literature has been generated that mainly presents prompt templates for specific workflows and tasks in marketing, advertising, SEO, and text translation. There are also many prompt templates for using these technologies in everyday tasks such as travel preparation. A quick analysis of this material shows that the considerations made tend to be repetitive and lack the depth required to use these technologies as assistants in professional practice and engineering education. On the other hand, in the most relevant literature, efforts have been made to formalize the construction of prompts, as will be discussed later. However, research has concentrated mainly on practice, research, teaching, and scientific publication in health. Consequently, there is a gap when considering the opportunities of using GPT and conversational assistants in engineering; a similar conclusion is reached in [<xref ref-type="bibr" rid="B10">10</xref>] by analyzing the potential uses of LLMs in business process management.</p>
          <p>This work seeks to fill this gap. We propose a methodology for developing prompts in engineering, structured as an iterative process and grounded in the main developments presented to date on this topic.</p>
         <p>The rest of this work is organized as follows: Section 2 overviews the current discourse on GPT and prompt engineering. Section 3 presents the methodology employed to propose a framework for engineering prompts, elaborating and illustrating it further in Section 4. Finally, the paper is concluded in Section 5.</p>
      </sec>
      <sec>
         <title>2. Literature review</title>
          <p>Generative language models are part of the broader category of Generative Pre-trained Transformers (GPT) and belong to the family of deep learning models [<xref ref-type="bibr" rid="B11">11</xref>]. Their competence in interpreting and producing human language is based on the principles and techniques of natural language processing (NLP) [<xref ref-type="bibr" rid="B12">12</xref>]. Prompt engineering is essential to use generative language models effectively [<xref ref-type="bibr" rid="B13">13</xref>].</p>
          <p>Generative language models are usually evaluated through sentiment analysis and opinion mining, in which users rate their performance and express their opinions. This evaluation allows for continuous improvement of these systems. However, there are also profound ethical implications to their use, given their potential for disseminating biased or erroneous data and their implications for plagiarism and copyright [<xref ref-type="bibr" rid="B14">14</xref>,<xref ref-type="bibr" rid="B15">15</xref>]. In essence, generative language models offer substantial opportunities in various fields but require careful implementation, ethical oversight, and responsive adaptation to ensure their positive impact and reliability [<xref ref-type="bibr" rid="B16">16</xref>-<xref ref-type="bibr" rid="B19">19</xref>].</p>
         <sec>
            <title>
               <italic>2.1. Discussion about ChatGPT</italic>
            </title>
            <p>The current academic discussion about ChatGPT and LLM revolves around three axes: Generative Artificial Intelligence, Education, and Ethics.</p>
             <p>Generative artificial intelligence (Generative AI), based on transformer architectures, is rapidly advancing in the domain of artificial intelligence [<xref ref-type="bibr" rid="B20">20</xref>,<xref ref-type="bibr" rid="B21">21</xref>]. These models can generate content in various formats, such as text and images, that closely resembles what humans produce [<xref ref-type="bibr" rid="B20">20</xref>,<xref ref-type="bibr" rid="B21">21</xref>]. These capabilities profoundly impact professional practice and education [<xref ref-type="bibr" rid="B3">3</xref>,<xref ref-type="bibr" rid="B22">22</xref>]. Many concerns relate to academic and professional integrity and student evaluation [<xref ref-type="bibr" rid="B22">22</xref>,<xref ref-type="bibr" rid="B23">23</xref>]. The dual nature of the technology (enabling academic dishonesty while potentially enriching pedagogical approaches) forces institutions to critically evaluate their assessment methodologies to ensure content accuracy and authenticity [<xref ref-type="bibr" rid="B22">22</xref>,<xref ref-type="bibr" rid="B23">23</xref>]. The foray of generative AI into the educational space has motivated academics to rethink traditional educational frameworks, generating opportunities and challenges [<xref ref-type="bibr" rid="B24">24</xref>,<xref ref-type="bibr" rid="B25">25</xref>]. In summary, the integration of generative artificial intelligence at the professional and academic levels requires rigorous evaluation, continuous research, and adaptation strategies to harness its benefits and address the associated challenges [<xref ref-type="bibr" rid="B22">22</xref>-<xref ref-type="bibr" rid="B25">25</xref>].</p>
             <p>Education is one of the areas most impacted by the popularization of GPTs, mainly through conversational agents powered by AI [<xref ref-type="bibr" rid="B26">26</xref>]. There are already many publications on this aspect in the most relevant literature, particularly for medical education. Since conversational models can generate human-like text, they can be used in curriculum design, as part of teaching methodologies, and to create personalized learning resources [<xref ref-type="bibr" rid="B26">26</xref>]. These capabilities facilitate the teaching of complex concepts and help educators monitor and refine their pedagogical approaches [<xref ref-type="bibr" rid="B26">26</xref>]. Beyond the educational field, conversational agents can offer relevant and accurate information to individuals and communities, thus demonstrating their usefulness as a complementary information resource that improves access to information and decision-making [<xref ref-type="bibr" rid="B14">14</xref>]. As in other cases discussed, there are significant concerns about possible bias, security, and ethical implications associated with using these tools [<xref ref-type="bibr" rid="B17">17</xref>,<xref ref-type="bibr" rid="B15">15</xref>]. For this reason, it is imperative to guarantee the accurate, transparent, and ethically sound deployment of these tools, especially those intended for consultation by the general public [<xref ref-type="bibr" rid="B16">16</xref>].</p>
             <p>Models, such as ChatGPT, have generated immense interest due to their transformative potential in different sectors, such as administration, finance, health, and education [<xref ref-type="bibr" rid="B27">27</xref>]. However, their integration has raised complex questions, particularly around authorship, plagiarism, and the distinction between human- and AI-generated content [<xref ref-type="bibr" rid="B27">27</xref>-<xref ref-type="bibr" rid="B29">29</xref>]. One of the fundamental issues is whether AI systems should be credited as authors in academic writings [<xref ref-type="bibr" rid="B27">27</xref>,<xref ref-type="bibr" rid="B28">28</xref>,<xref ref-type="bibr" rid="B30">30</xref>]. The distinction between human-written content and AI-generated content is becoming more blurred, heightening concerns about plagiarism [<xref ref-type="bibr" rid="B27">27</xref>,<xref ref-type="bibr" rid="B28">28</xref>,<xref ref-type="bibr" rid="B31">31</xref>]. AI models can generate seemingly genuine but potentially misleading scientific texts [<xref ref-type="bibr" rid="B32">32</xref>]. In response, there is an emphasis on greater scrutiny, transparency, and ethical standards in the use of AI in research [<xref ref-type="bibr" rid="B31">31</xref>,<xref ref-type="bibr" rid="B32">32</xref>]. Thus, achieving a balance between the advantages of AI and ethical considerations becomes paramount [<xref ref-type="bibr" rid="B27">27</xref>,<xref ref-type="bibr" rid="B29">29</xref>,<xref ref-type="bibr" rid="B30">30</xref>,<xref ref-type="bibr" rid="B33">33</xref>], which requires an emphasis on transparency, fairness, and open-source initiatives [<xref ref-type="bibr" rid="B32">32</xref>,<xref ref-type="bibr" rid="B33">33</xref>].</p>
         </sec>
         <sec>
            <title>
               <italic>2.2. Discussion about prompt engineering</italic>
            </title>
             <p>Prompt engineering is a set of techniques and methods to design, write, and optimize instructions for LLMs, called prompts, so that the model's answers are precise, concrete, accurate, replicable, and factually correct [<xref ref-type="bibr" rid="B8">8</xref>,<xref ref-type="bibr" rid="B9">9</xref>,<xref ref-type="bibr" rid="B18">18</xref>]. Prompts are understood as a form of programming because they can customize the outputs and interactions with an LLM [<xref ref-type="bibr" rid="B9">9</xref>]. They involve adapting instructions in natural language, obtaining the desired responses, guaranteeing contextually accurate results, and increasing the usefulness of generative language models in various applications [<xref ref-type="bibr" rid="B13">13</xref>]. Applications include fields such as medical education, radiology, and science education [<xref ref-type="bibr" rid="B11">11</xref>,<xref ref-type="bibr" rid="B12">12</xref>,<xref ref-type="bibr" rid="B34">34</xref>]. These systems can be used, for example, as virtual assistants for student care or report writing, transforming complex information into a coherent narrative [<xref ref-type="bibr" rid="B11">11</xref>,<xref ref-type="bibr" rid="B26">26</xref>,<xref ref-type="bibr" rid="B34">34</xref>].</p>
            <p>Efforts are underway to standardize the terminology and concepts within prompt engineering, with various classifications of prompts emerging based on different criteria.</p>
             <p>According to their structure, prompts can be formulated using open-ended or closed-ended questions. Open-ended questions do not have a specific or limited answer and allow a more extensive and detailed response from the model. They are helpful, for example, for critical reading tasks [<xref ref-type="bibr" rid="B35">35</xref>]. In contrast, closed-ended questions typically have specific and limited answers, often yes or no, multiple-choice, or a short and defined response. For example, instead of asking “What is the capital of Italy?” (a closed-ended question), an open-ended question might be, “Tell me about the history and culture of Rome.”</p>
            <p>According to the information provided, prompts can be categorized into levels 1 to 4. The first level consists of straightforward questions, while the second level introduces additional context about the writer and the language model. The third level includes provided examples for the language model to reference, and the fourth level allows the language model to break down the request into individual components (much like requesting a step-by-step solution to a mathematical problem, offering the language model a more structured way to handle the prompt for improved accuracy) [<xref ref-type="bibr" rid="B36">36</xref>].</p>
             <p>Comparably, prompts have also been classified as instructive, system, question-answer, and mixed. Instructive prompts start with a verb that specifies the action to be performed by the system. System prompts provide the system with a starting point or context from which to develop content. Question-answer prompts formulate a wh-question (who, what, when, where, why, or how). Mixed prompts blend two or more of the techniques mentioned above [<xref ref-type="bibr" rid="B8">8</xref>].</p>
             <p>According to the number of examples provided, instructions are classified as zero-shot and few-shot prompts, where "shot" is equivalent to an example [<xref ref-type="bibr" rid="B36">36</xref>]. Zero-shot prompts are used in situations where it is not necessary to train the LLM or present sample outputs [<xref ref-type="bibr" rid="B37">37</xref>]. Examples include prompts used to translate or summarize texts, as well as simple questions answered from the internal knowledge of the LLM, such as "define prompt engineering." Few-shot prompts, in contrast, include one or more worked examples that illustrate the expected output.</p>
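The zero-shot/few-shot distinction can be made concrete with two short prompt strings. The prompts below are our own illustrative examples (the translation task and wording are hypothetical, not taken from the cited sources):

```python
# Hypothetical prompts illustrating the zero-shot / few-shot distinction.

# Zero-shot: no examples are provided; the model answers from the
# knowledge acquired during training.
zero_shot = "Translate the following sentence into French: 'Good morning.'"

# Few-shot: the prompt embeds worked examples ("shots") that demonstrate
# the expected input/output format before posing the actual task.
few_shot = (
    "Translate English to French.\n"
    "cheese -> fromage\n"
    "thank you -> merci\n"
    "good morning ->"
)

# The few-shot prompt contains two example pairs plus the final query.
num_shots = few_shot.count("->") - 1
```

The embedded pairs act as in-context training data: the model infers the task format from them rather than from explicit instructions alone.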
             <p>Reproducibility is a desired characteristic, but LLMs produce inherently random responses due to their intrinsic design [<xref ref-type="bibr" rid="B7">7</xref>].</p>
             <p>Many sources recognize that the development of prompts is an iterative process. In addition, the prompt text should be clear, concise, and to the point, avoiding unnecessary complexity [<xref ref-type="bibr" rid="B38">38</xref>].</p>
             <p>As discussed, poorly designed prompts generate vague, biased, misleading, or ambiguous responses. Another major problem is hallucinations [<xref ref-type="bibr" rid="B5">5</xref>,<xref ref-type="bibr" rid="B9">9</xref>]. Many researchers highlight the necessity of verifying facts presented in the responses of conversational LLMs, such as academic citations.</p>
         </sec>
      </sec>
      <sec sec-type="methods">
         <title>3. Methodology</title>
         <p>We conducted a comprehensive literature search using the Scopus database to identify scientific papers on prompt engineering. Scopus is renowned as one of the largest repositories of peer-reviewed scientific literature, and it encompasses a broad spectrum of disciplines, including science, technology, medicine, and social sciences [<xref ref-type="bibr" rid="B39">39</xref>].</p>
          <p>We designed and used the following search string, which retrieved 184 documents.</p>
         <p>TITLE ( ( prompt AND ChatGPT ) OR ( prompt AND engineering ) ) OR KEY ( ( prompt AND ChatGPT ) OR ( prompt AND engineering ) ) </p>
         <p>The analysis of the documents and the valuable findings for a prompt design methodology are presented below.</p>
      </sec>
      <sec sec-type="results">
         <title>4. Results</title>
         <sec>
            <title>
               <italic>4.1. Analysis</italic>
            </title>
             <p>The majority of the literature found can be categorized into two groups: specific applications (particularly in the field of medicine) and guidelines and recommendations for prompt design [<xref ref-type="bibr" rid="B36">36</xref>,<xref ref-type="bibr" rid="B38">38</xref>]. Only seven papers go beyond prompt design to propose a methodology for interacting with ChatGPT using prompts.</p>
             <p>
                <table-wrap id="t1">
                   <label>Table 1</label>
                   <caption>
                      <title>Components of methodologies.</title>
                   </caption>
                   <table>
                      <colgroup>
                         <col/>
                         <col/>
                         <col/>
                         <col/>
                         <col/>
                         <col/>
                      </colgroup>
                      <thead>
                         <tr>
                            <th align="justify">Authors</th>
                            <th align="center">Application</th>
                            <th align="center">Guidelines</th>
                            <th align="center">Data</th>
                            <th align="center">Evaluation Criteria</th>
                            <th align="center">Iteration</th>
                         </tr>
                      </thead>
                      <tbody>
                         <tr>
                            <td align="justify">Chang, 2023 [35]</td>
                            <td align="center">Critical Reading</td>
                            <td align="center">X</td>
                            <td align="center"> </td>
                            <td align="center">X</td>
                            <td align="center">X</td>
                         </tr>
                         <tr>
                            <td align="justify">Eager and Brunton, 2023 [18]</td>
                            <td align="center">Education</td>
                            <td align="center">X</td>
                            <td align="center">X</td>
                            <td align="center">X</td>
                            <td align="center">X</td>
                         </tr>
                         <tr>
                            <td align="justify">Giray, 2023 [8]</td>
                            <td align="center">Academic Writing</td>
                            <td align="center">X</td>
                            <td align="center">X</td>
                            <td align="center"> </td>
                            <td align="center"> </td>
                         </tr>
                         <tr>
                            <td align="justify">Jha et al., 2023 [5]</td>
                            <td align="center">General</td>
                            <td align="center"> </td>
                            <td align="center"> </td>
                            <td align="center">X</td>
                            <td align="center">X</td>
                         </tr>
                         <tr>
                            <td align="justify">Lo, 2023 [19]</td>
                            <td align="center">General</td>
                            <td align="center">X</td>
                            <td align="center"> </td>
                            <td align="center">X</td>
                            <td align="center">X</td>
                         </tr>
                         <tr>
                            <td align="justify">Shieh, 2023 [40]</td>
                            <td align="center">General</td>
                            <td align="center"> </td>
                            <td align="center"> </td>
                            <td align="center"> </td>
                            <td align="center">X</td>
                         </tr>
                      </tbody>
                   </table>
                   <table-wrap-foot>
                      <fn id="TFN1">
                         <p>Source: The authors</p>
                      </fn>
                   </table-wrap-foot>
                </table-wrap>
             </p>
             <p>This situation can be attributed to the large number of documents and gray literature offering compilations of prompt examples and templates for specific tasks such as marketing, advertising, or text translation. These guides are designed for the non-technical user, and they play an essential role in popularizing ChatGPT and LLMs, although they may not be part of the scientific literature. <xref ref-type="table" rid="t1">Table 1</xref> presents the components of these methodologies and other pertinent sources within the gray literature.</p>
            <p>Several methodologies have been developed for general applications (though they are typically published in medical field journals), while some have been proposed for specific domains. We did not find any methodologies proposed for the field of engineering.</p>
             <p>Most of the methodologies provide guidelines for prompt design and incorporate iteration. However, given the nature of the application field, some methodologies are based on specific prompt designs, such as open-ended questions for critical reading [<xref ref-type="bibr" rid="B35">35</xref>] or persona design for academic writers [<xref ref-type="bibr" rid="B8">8</xref>]. Only two methodologies include providing data within the prompt for the system's response retrieval.</p>
             <p>While some methodologies involve evaluating the response before iterating on the prompt [<xref ref-type="bibr" rid="B5">5</xref>,<xref ref-type="bibr" rid="B18">18</xref>,<xref ref-type="bibr" rid="B19">19</xref>,<xref ref-type="bibr" rid="B35">35</xref>], not all of them provide components for conducting this assessment [<xref ref-type="bibr" rid="B5">5</xref>,<xref ref-type="bibr" rid="B18">18</xref>].</p>
            <p>Even though some methodologies hold promise, they are not currently directly applicable to prompt design. For instance, in [<xref ref-type="bibr" rid="B5">5</xref>,<xref ref-type="bibr" rid="B41">41</xref>], a methodology is proposed that may be useful for the internal programming of LLMs but not for human-user interaction with such systems. Furthermore, in [<xref ref-type="bibr" rid="B42">42</xref>], a hermeneutic exercise is conducted without a proposed methodology that can be applied to other domains.</p>
             <p>As of the publication date of this paper, we have not found official documentation from Google on recommendations for interacting with Bard (the Artificial Intelligence system developed by the company [<xref ref-type="bibr" rid="B43">43</xref>]). Similarly, we have not come across official documentation from Microsoft regarding recommendations for interacting with ChatGPT through their Bing browser [<xref ref-type="bibr" rid="B44">44</xref>].</p>
         </sec>
         <sec>
            <title>
               <italic>4.2. Proposal</italic>
            </title>
            <p>We collected the guidelines, recommendations, and common elements from the various methodologies for prompt design that were analyzed earlier. Furthermore, we also considered issues related to hallucinations and low-quality responses and integrated those elements into a methodology for interacting with ChatGPT through prompts for the engineering field.</p>
            <p>The methodology is called GPEI, which stands for Goal, Prompt, Evaluation, Iteration. GPEI consists of four steps: (1) define a goal, (2) design the prompt, (3) evaluate the answer, and (4) iterate, as <xref ref-type="fig" rid="f1">Figure 1</xref> shows. The methodology is explained below.</p>
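            <p>For illustration, the four steps can be sketched as a simple control loop. The Python sketch below is our own; the helper functions (design_prompt, ask_llm, evaluate) and the iteration limit are hypothetical placeholders, not part of the methodology's formal definition.</p>

```python
# Illustrative sketch of the GPEI loop (all helper functions are hypothetical).
def gpei(goal, design_prompt, ask_llm, evaluate, max_iterations=5):
    """Iterate on the prompt until the answer meets the goal's criteria."""
    prompt = design_prompt(goal, feedback=None)   # Step 2: design the prompt
    for _ in range(max_iterations):
        answer = ask_llm(prompt)                  # query the language model
        ok, feedback = evaluate(goal, answer)     # Step 3: evaluate the answer
        if ok:
            return answer                         # criteria met: stop iterating
        prompt = design_prompt(goal, feedback)    # Step 4: redesign and retry
    return None                                   # no adequate answer obtained
```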
            <sec>
               <title>4.2.1. Step 1: Define a goal</title>
               <p>The process begins by defining the goal to be achieved by the AI model. The goal will determine the structure of the prompt to be designed in the following step and assist in evaluating the quality of the system’s response before further iterations. Despite its significance, this activity is explicitly outlined only in one of the analyzed methodologies [<xref ref-type="bibr" rid="B18">18</xref>]; in the remaining methodologies, the objective is disaggregated within the prompt design.</p>
            </sec>
            <sec>
               <title>4.2.2. Step 2: Design the prompt</title>
               <p>This step consists of the design of the prompt. In [<xref ref-type="bibr" rid="B9">9</xref>], a catalog of prompt patterns is presented and discussed. The authors describe 12 patterns for prompt design; for each pattern, they also identify the intent, motivation, key ideas, and consequences of the approach. Five of these patterns are oriented toward customizing the output obtained from the system: Output Automater, Persona, Visualization Generator, Recipe, and Template.</p>
               <p>
                  <fig id="f1">
                     <label>Figure 1</label>
                     <caption>
                        <title>GPEI Methodology</title>
                     </caption>
                     <graphic xlink:href="v90nspe230a02image001.png"/>
                     <attrib>Source: The authors.</attrib>
                  </fig>
               </p>
               <p>
                  <fig id="f2">
                     <label>Figure 2</label>
                     <caption>
                        <title>Example of a prompt template for including specific information.</title>
                     </caption>
                     <graphic xlink:href="v90nspe230a02image002.jpg"/>
                     <attrib>Source: The authors.</attrib>
                  </fig>
               </p>
               <p>According to the established goal, the user should use the pattern that best suits their purpose. For example, if the persona pattern is chosen, the prompt should have the following elements [<xref ref-type="bibr" rid="B36">36</xref>]:</p>
               <p>
                  <list list-type="bullet">
                     <list-item>
                        <p>The definition of the role of the person who is asking the question.</p>
                     </list-item>
                     <list-item>
                        <p>The definition of a role or a context: "you are …", "you act as …"</p>
                     </list-item>
                     <list-item>
                         <p>The definition of what is required: Your task is … / Write … / Rephrase …</p>
                     </list-item>
                     <list-item>
                         <p>A description of the output format (for example, a paragraph, a bulleted list, a table, JSON data, etc.).</p>
                     </list-item>
                     <list-item>
                        <p>A description of limits for the expected results.</p>
                     </list-item>
                  </list>
               </p>
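                <p>A prompt following these elements can be assembled as a simple template. The sketch below is our own hypothetical illustration in Python; the function name and its parameters are ours, not part of the pattern catalog in [<xref ref-type="bibr" rid="B36">36</xref>].</p>

```python
# Hypothetical template assembling the persona-pattern elements listed above.
def persona_prompt(asker_role, llm_role, task, output_format, limits):
    return (
        f"I am a {asker_role}. "                    # role of the person asking
        f"You act as {llm_role}. "                  # role/context given to the LLM
        f"Your task is to {task}. "                 # what is required
        f"Present the result as {output_format}. "  # output format
        f"Limit the answer to {limits}."            # limits on the expected results
    )
```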
               <p>In an engineering context, we advise that the prompt includes the necessary data for the system to generate responses. An example is presented in <xref ref-type="fig" rid="f2">Fig. 2</xref>.</p>
               <p>The literature provides some recommendations for prompt design.</p>
               <p>
                  <list list-type="bullet">
                     <list-item>
                        <p>Extending prompts with phrases such as "within the scope" and "Let's think step by step … to reach conclusions" could improve the response of the system.</p>
                     </list-item>
                     <list-item>
                         <p>A strategy for complex responses involves asking the LLM to break the result into small chunks [<xref ref-type="bibr" rid="B45">45</xref>].</p>
                     </list-item>
                     <list-item>
                         <p>Treat prompts as instructions in the context of computer programming, such that it is unnecessary to be polite; avoid phrases such as "Please, give me …" [<xref ref-type="bibr" rid="B45">45</xref>].</p>
                     </list-item>
                     <list-item>
                        <p>Strategies, such as the Tree of Thoughts [<xref ref-type="bibr" rid="B41">41</xref>], can be used to structure prompts for complex problems. </p>
                     </list-item>
                     <list-item>
                        <p>Frameworks, such as CLEAR, propose a Concise, Logical, Explicit, Adaptive, and Reflective process to optimize interactions with AI language models like ChatGPT through prompts.</p>
                     </list-item>
                     <list-item>
                        <p>OpenAI suggests separating instructions from the context through a break in the prompt or using quotation marks to reference textual elements from the context [<xref ref-type="bibr" rid="B40">40</xref>].</p>
                     </list-item>
                     <list-item>
                        <p>Adding examples to articulate the desired output could also be useful [<xref ref-type="bibr" rid="B40">40</xref>].</p>
                     </list-item>
                  </list>
               </p>
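                <p>The last two recommendations (separating instructions from the context, and adding an example of the desired output) can be combined in a single prompt. The sketch below is our own illustration of this structure, with example strings of our invention; it is not taken from [<xref ref-type="bibr" rid="B40">40</xref>].</p>

```python
# Our illustration: instructions separated from the context by triple quotes,
# plus an explicit example of the desired output format (few-shot style).
instructions = "Summarize the text between triple quotes as one bullet list."
context = "Prompt engineering optimizes interactions with language models."
example = "- first key idea\n- second key idea"

prompt = (
    f"{instructions}\n\n"
    f'"""{context}"""\n\n'          # delimiters mark the referenced text
    f"Desired output format:\n{example}"
)
```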
                <p>Moreover, the prompt can be used as a template for solving similar problems or generating new prompts for similar problems (template pattern [<xref ref-type="bibr" rid="B9">9</xref>]). Ambiguity can be avoided by providing a detailed description or scope of the problem or the data, as exemplified in <xref ref-type="fig" rid="f3">Fig. 3</xref>.</p>
                <p>A more comprehensive guide for developing template prompts can be found in [<xref ref-type="bibr" rid="B35">35</xref>]. A very important recommendation is to use the designed prompt in the chat and then collect and save the system output. At this point, it is imperative to preserve the history of the design process to perform ex-post evaluations of the performance of the designed prompts.</p>
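                <p>Preserving this history can be as simple as appending each prompt/response pair to a log file. The helper below is a minimal sketch of our own; the function name and the JSON-lines format are illustrative choices, not a prescription.</p>

```python
# Minimal sketch: preserve the prompt/response history for ex-post evaluation.
# The JSON-lines format (one record per line) is our illustrative choice.
import datetime
import json

def log_interaction(history_file, prompt, response):
    record = {
        "timestamp": datetime.datetime.now().isoformat(),
        "prompt": prompt,
        "response": response,
    }
    with open(history_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")  # append one JSON record per line
```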
            </sec>
            <sec>
               <title>4.2.3. Step 3: Evaluate the answer.</title>
                <p>Perform a formal verification of the answer provided by ChatGPT in terms of the design criteria specified in Step 1. Evaluating the obtained response is not a trivial step, since it is what allows system hallucinations to be detected and reduced.</p>
                <p>The following questions could be helpful for this purpose [<xref ref-type="bibr" rid="B19">19</xref>]:</p>
               <disp-quote>
                  <p>Is the answer as complete as expected?</p>
               </disp-quote>
               <disp-quote>
                  <p>Is the answer as accurate as expected?</p>
               </disp-quote>
               <disp-quote>
                  <p>Is the answer as relevant as expected?</p>
               </disp-quote>
               <disp-quote>
                  <p>Were the specified limits met?</p>
               </disp-quote>
               <disp-quote>
                  <p>Does the answer have elements that may be factually incorrect (hallucinations)?</p>
               </disp-quote>
               <disp-quote>
                  <p>Does the answer have elements that may be contextually inappropriate?</p>
               </disp-quote>
                <p>The available literature offers various methods for assessing ChatGPT's responses. For instance, one approach involves rephrasing a question to elicit different responses, which can help identify inconsistencies among multiple answers. Another strategy is to request supporting evidence, such as querying the top-k information sources and having the language model rate the credibility of each source. One can also seek opposing viewpoints from the language model, including their sources and credibility, to evaluate the strength of a different perspective [<xref ref-type="bibr" rid="B35">35</xref>].</p>
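                <p>The rephrasing strategy can be operationalized as a simple consistency check: pose semantically equivalent variants of the question and flag disagreement among the answers. The sketch below is ours; it assumes a hypothetical ask_llm callable and uses a deliberately crude exact-match comparison after normalization.</p>

```python
# Consistency check via rephrased questions (ask_llm is a hypothetical model call).
def consistency_check(question_variants, ask_llm):
    """Return the answers and whether they all agree after normalization."""
    answers = [ask_llm(q) for q in question_variants]
    normalized = {a.strip().lower() for a in answers}
    consistent = len(normalized) == 1  # disagreement hints at a hallucination
    return answers, consistent
```

In practice, the exact-match comparison would be replaced by a semantic similarity measure, but the control flow is the same.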
               <p>In [<xref ref-type="bibr" rid="B5">5</xref>], formal methods are integrated into the design of prompts for critical and autonomous systems with the aim of self-monitoring and automatic detection of errors and hallucinations. Among the recommendations, the authors suggest that one could consider providing counterexamples in the prompt to prevent hallucinations [<xref ref-type="bibr" rid="B5">5</xref>].</p>
               <p>Furthermore, it is possible to design other prompts to evaluate a response. For instance, prompts falling under the error identification category in [<xref ref-type="bibr" rid="B9">9</xref>] involve generating a list of facts the output depends on that should be fact-checked and then introspecting on its output to identify any errors.</p>
                <p>A potentially useful strategy to evaluate the answer of an LLM is to incorporate elements commonly used to design Explainable AI (XAI) systems [<xref ref-type="bibr" rid="B46">46</xref>]. We propose the following guidelines to incorporate these principles to evaluate the answer's quality:</p>
               <p>
                  <fig id="f3">
                     <label>Figure 3</label>
                     <caption>
                        <title>Prompt example for including specific information.</title>
                     </caption>
                     <graphic xlink:href="v90nspe230a02image003.png"/>
                  </fig>
               </p>
               <p>
                  <list list-type="bullet">
                     <list-item>
                        <p>Ask for the reasoning behind a particular answer.</p>
                     </list-item>
                     <list-item>
                         <p>Verify that the prompt asks the LLM to provide a simple, direct, and unambiguous response.</p>
                     </list-item>
                     <list-item>
                         <p>Verify that the prompt requires the LLM to justify the answer.</p>
                     </list-item>
                     <list-item>
                         <p>Ask the LLM to break down complex answers into bullets, steps, or stages.</p>
                     </list-item>
                     <list-item>
                        <p>Inquire about the data sources or training data.</p>
                     </list-item>
                  </list>
               </p>
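                <p>These guidelines can be turned into a reusable batch of follow-up prompts. The helper below is a sketch of our own; the function name and the wording of the follow-up questions are illustrative, not prescribed by the XAI literature.</p>

```python
# Our sketch: derive XAI-style follow-up prompts for a given answer's topic.
def xai_followups(answer_topic):
    return [
        f"Explain the reasoning behind your answer about {answer_topic}.",
        f"Break down your answer about {answer_topic} into numbered steps.",
        f"Justify your answer about {answer_topic} and state its limits.",
        f"Which data sources support your answer about {answer_topic}?",
    ]
```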
            </sec>
            <sec>
               <title>4.2.4. Step 4: Iterate</title>
                <p>If the answer fails to meet the evaluation criteria, prompt modification is required, which entails adjusting the design and then obtaining and assessing a new answer. This iterative refinement process continues until the system's response is deemed adequate.</p>
                <p>As suggested in [<xref ref-type="bibr" rid="B35">35</xref>], the iterative process could be as simple as posing the same question from an opposing perspective. Specific prompt patterns can also help; for example, the patterns within the Improvement category in [<xref ref-type="bibr" rid="B9">9</xref>] focus on enhancing the quality of both input and output. Patterns in this category involve suggesting alternative ways for the LLM to accomplish a user-specified task, instructing the LLM to automatically propose a series of subquestions for users to answer before combining those answers to produce a response to the overarching question (similar to [<xref ref-type="bibr" rid="B41">41</xref>]), and mandating the LLM to automatically rephrase user queries when it declines to provide an answer.</p>
            </sec>
         </sec>
         <sec>
            <title>
               <italic>4.3. Application</italic>
            </title>
            <p>For this example, we will seek the solution to a well-known engineering problem: the knapsack problem [<xref ref-type="bibr" rid="B47">47</xref>]. In this problem, one must decide which items to take on a trip based on the utility value of each item, the weight of each item, and a limit on the total weight that can be carried.</p>
            <sec>
               <title>4.3.1. Step 1: Define a goal.</title>
               <p>From a given list of items, one should find the combination that has the highest total benefit without exceeding a total weight of 10 kg.</p>
            </sec>
            <sec>
               <title>4.3.2. Step 2: Design the prompt.</title>
               <p>The designed prompt and the corresponding response are presented in <xref ref-type="table" rid="t2">Table 2</xref>.</p>
               <p>
                  <table-wrap id="t2">
                     <label>Table 2</label>
                     <caption>
                        <title>First iteration for the application case. </title>
                     </caption>
                     <table>
                        <colgroup>
                           <col/>
                           <col/>
                        </colgroup>
                        <thead>
                           <tr>
                              <th align="justify">Role</th>
                              <th align="justify">Dialogue</th>
                           </tr>
 
                        </thead>
                        <tbody>
                           <tr>
                              <td align="justify">Prompt</td>
                              <td align="justify">Given the items below, choose the set of items that has the highest total value, as long as the sum of the weights of those items does not exceed 10 kilograms: Item 1: Value - 8, Weight - 3 kg; Item 2: Value - 6, Weight - 2 kg; Item 3: Value - 10, Weight - 5 kg; Item 4: Value - 3, Weight - 1 kg; Item 5: Value - 7, Weight - 4 kg; Item 6: Value - 9, Weight - 3 kg; Item 7: Value - 5, Weight - 2 kg; Item 8: Value - 4, Weight - 1 kg; Item 9: Value - 2, Weight - 6 kg; Item 10: Value - 10, weight - 10 kg</td>
                           </tr>
 
                           <tr>
                              <td align="justify">GPT</td>
                              <td align="justify">Given the items provided, the optimal selection to achieve the highest total value without exceeding 10 kilograms would be: 
* Item 1: Value - 8, Weight - 3 kg
* Item 2: Value - 6, Weight - 2 kg
* Item 3: Value - 10, Weight - 5 kg
* Total value: 24; Total weight: 10 kg</td>
                           </tr>
                        </tbody>
                     </table>
                     <table-wrap-foot>
                        <fn id="TFN2">
                           <p>Source: The authors.</p>
                        </fn>
                     </table-wrap-foot>
                  </table-wrap>
               </p>
               <p>
                  <table-wrap id="t3">
                     <label>Table 3</label>
                     <caption>
                        <title>Second iteration for the application case. </title>
                     </caption>
                     <graphic xlink:href="v90nspe230a02image004.jpg"/>
                     <table-wrap-foot>
                        <fn id="TFN3">
                           <p>Source: The authors.</p>
                        </fn>
                     </table-wrap-foot>
                  </table-wrap>
               </p>
            </sec>
            <sec>
               <title>4.3.3. Step 3: Evaluate the answer.</title>
                <p>The items meet the imposed restriction; however, independent verification with an optimization tool shows that the optimal solution is: </p>
               <p>
                  <list list-type="bullet">
                     <list-item>
                        <p>Item 1: Value - 8, Weight - 3 kg. </p>
                     </list-item>
                     <list-item>
                        <p>Item 2: Value - 6, Weight - 2 kg.</p>
                     </list-item>
                     <list-item>
                        <p>Item 4: Value - 3, Weight - 1 kg.</p>
                     </list-item>
                     <list-item>
                        <p>Item 6: Value - 9, Weight - 3 kg.</p>
                     </list-item>
                     <list-item>
                        <p>Item 8: Value - 4, Weight - 1 kg.</p>
                     </list-item>
                     <list-item>
                        <p>Total value: 30.</p>
                     </list-item>
                     <list-item>
                        <p>Total weight: 10 kg</p>
                     </list-item>
                  </list>
               </p>
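                <p>The independent verification mentioned above can be reproduced with a standard 0/1 knapsack dynamic program. The code below is our own verification sketch (not ChatGPT output); item indices are 1-based to match the numbering used in the example.</p>

```python
# Independent verification: 0/1 knapsack by dynamic programming over 0..10 kg.
def knapsack(values, weights, capacity):
    n = len(values)
    best = [0] * (capacity + 1)              # best[c]: max value within weight c
    chosen = [[] for _ in range(capacity + 1)]
    for i in range(n):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, weights[i] - 1, -1):
            candidate = best[c - weights[i]] + values[i]
            if candidate > best[c]:
                best[c] = candidate
                chosen[c] = chosen[c - weights[i]] + [i + 1]  # 1-based item id
    return best[capacity], chosen[capacity]

values  = [8, 6, 10, 3, 7, 9, 5, 4, 2, 10]
weights = [3, 2, 5, 1, 4, 3, 2, 1, 6, 10]
total, items = knapsack(values, weights, 10)
```

Running this sketch confirms the optimum reported above: total value 30 with items 1, 2, 4, 6, and 8.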
               <p>Let's see if we can find a better solution than the one found in iteration 1 using ChatGPT.</p>
            </sec>
            <sec>
               <title>4.3.4. Step 4: Iterate</title>
                <p>A newly designed prompt, presented in <xref ref-type="table" rid="t3">Table 3</xref>, is used for the second iteration. Although a better answer was achieved in this iteration than in the previous one, ChatGPT still does not find the system's optimum. Other strategies will be employed in iterations 3 and 4 to aid in solving this problem.</p>
                <p>For iteration 3, the strategy is changed from asking for the solution to requesting the formulation of the problem (see <xref ref-type="table" rid="t4">Table 4</xref>). In iteration 4, we designed a prompt to request the generation of the code necessary for Google Colaboratory [<xref ref-type="bibr" rid="B48">48</xref>] to solve the previous problem. The dialogue is presented in <xref ref-type="table" rid="t5">Table 5</xref>.</p>
               <p>
                  <table-wrap id="t4">
                     <label>Table 4</label>
                     <caption>
                        <title>Iteration 3 for the application case.</title>
                     </caption>
                     <table>
                        <colgroup>
                           <col/>
                           <col/>
                        </colgroup>
                        <thead>
                           <tr>
                              <th align="justify">Role</th>
                              <th align="justify">Dialogue</th>
                           </tr>
 
                        </thead>
                        <tbody>
                           <tr>
                              <td align="justify">Prompt</td>
                              <td align="justify">Formulate the objective function and the correspondent constraints for the following problem: 
Given a number of items, each characterized by a value and a weight in kilograms, find the set of items where the sum of their values is maximum, and the sum of their weights is less than or equal to 10 kilograms. The sets can have any number of elements, as long as they are not repeated.</td>
                           </tr>
 
                           <tr>
                              <td align="justify">GPT</td>
                               <td align="justify">This is a classic Knapsack problem.
<bold>Objective function:</bold>
Maximize Z, where Z = ∑<sub>i=1</sub><sup>n</sup> v<sub>i</sub>x<sub>i</sub>.
This objective function represents the total value of the selected items.
<bold>Constraints:</bold>
1. The total weight of the selected items should not exceed 10 kg: ∑<sub>i=1</sub><sup>n</sup> w<sub>i</sub>x<sub>i</sub> ≤ 10.
2. Decision variable constraints: 0 ≤ x<sub>i</sub> ≤ 1, and x<sub>i</sub> must be an integer (binary).
The objective function aims to maximize the total value of selected items, subject to the constraints that the total weight does not exceed 10 kilograms and each item is either selected or not and not repeated.</td>
                           </tr>
                        </tbody>
                     </table>
                     <table-wrap-foot>
                        <fn id="TFN4">
                           <p>Source: The authors.</p>
                        </fn>
                     </table-wrap-foot>
                  </table-wrap>
               </p>
               <p>
                  <table-wrap id="t5">
                     <label>Table 5</label>
                     <caption>
                        <title>Iteration 4 for the application case.</title>
                     </caption>
                     <table>
                        <colgroup>
                           <col/>
                           <col/>
                        </colgroup>
                        <thead>
                           <tr>
                              <th align="justify">Role</th>
                              <th align="justify">Dialogue</th>
                           </tr>
 
                        </thead>
                        <tbody>
                           <tr>
                              <td align="justify">Prompt</td>
                              <td align="justify">Generate a code to solve this problem for Google Colab, Use the following items for the code: Item 1: Value - 8, Weight - 3 kg; Item 2: Value - 6, Weight - 2 kg; Item 3: Value - 10, Weight - 5 kg; Item 4: Value - 3, Weight - 1 kg; Item 5: Value - 7, Weight - 4 kg; Item 6: Value - 9, Weight - 3 kg; Item 7: Value - 5, Weight - 2 kg; Item 8: Value - 4, Weight - 1 kg; Item 9: Value - 2, Weight - 6 kg; Item 10: Value - 10, weight - 10 kg</td>
                           </tr>
 
                           <tr>
                              <td align="justify">GPT</td>
                              <td align="justify"># Import PuLP library
!pip install pulp

from pulp import (
    LpMaximize, LpProblem, lpSum, LpVariable
)
prob = LpProblem("Maximize_Value", LpMaximize)
items = {
    1: {"value": 8, "weight": 3},
    2: {"value": 6, "weight": 2},
    3: {"value": 10, "weight": 5},
    4: {"value": 3, "weight": 1},
    5: {"value": 7, "weight": 4},
    6: {"value": 9, "weight": 3},
    7: {"value": 5, "weight": 2},
    8: {"value": 4, "weight": 1},
    9: {"value": 2, "weight": 6},
    10: {"value": 10, "weight": 10}
}
item_vars = LpVariable.dicts(
    "Item", items.keys(), 0, 1, cat="Binary"
)
prob += lpSum(
    item_vars[i] * items[i]["value"] 
    for i in items
)
prob += lpSum(
   item_vars[i] * items[i]["weight"] 
   for i in items
) &lt;= 10

prob.solve()

selected_items = [
    i 
    for i in items 
    if item_vars[i].value() == 1
]
total_value = sum(
    items[i]["value"] for i in selected_items
)
total_weight = sum(
    items[i]["weight"] 
    for i in selected_items
)
print("Selected items:")
for i in selected_items:
    print(
       f"Item {i}: "
       f"Value - {items[i]['value']}, "
       f"Weight - {items[i]['weight']} kg"
)
print(f"Total Value: {total_value}")
print(f"Total Weight: {total_weight} kg")</td>
                           </tr>
                        </tbody>
                     </table>
                     <table-wrap-foot>
                        <fn id="TFN5">
                           <p>Source: The authors.</p>
                        </fn>
                     </table-wrap-foot>
                  </table-wrap>
               </p>
               <p>When we ran the code in Google Colab, we obtained this response:</p>
               <p>Requirement already satisfied: pulp in /usr/local/lib/python3.10/dist-packages (2.7.0) </p>
               <p>Selected items: </p>
               <p>Item 1: Value - 8, Weight - 3 kg; </p>
               <p>Item 2: Value - 6, Weight - 2 kg; </p>
               <p>Item 4: Value - 3, Weight - 1 kg; </p>
               <p>Item 6: Value - 9, Weight - 3 kg; </p>
                <p>Item 8: Value - 4, Weight - 1 kg; </p>
               <p>Total Value: 30; </p>
               <p>Total Weight: 10 kg</p>
                <p>This is the correct solution to the problem, so we halted the process here.</p>
            </sec>
         </sec>
      </sec>
      <sec sec-type="conclusions">
         <title>5. Conclusions</title>
         <p>Prompt engineering plays a pivotal role in optimizing the performance of LLM by crafting instructions or prompts that elicit precise, accurate, and contextually appropriate responses. Designing effective prompts is iterative and requires clear and concise language to avoid generating vague or biased responses.</p>
         <p>A literature analysis found that multiple methodologies for prompt engineering have been developed. Notably, no methodologies were found specifically designed for engineering. Most of these methodologies offer guidance for prompt design and emphasize iterative processes. Only two methodologies include data within the prompt to facilitate system response retrieval. While some methodologies involve response evaluation before iterating on the prompt, not all of them provide components for this assessment.</p>
         <p>We propose GPEI, an iterative methodology for optimizing interactions with AI language models in engineering through prompts. It is a four-step process: defining a goal, designing the prompt, evaluating the answer, and iterating until an adequate response is achieved. GPEI has two key elements: the inclusion of data in prompt design, which makes it suitable for applications in the field of engineering, and the use of principles from Explainable AI (XAI) systems to evaluate answers, promoting transparency and justifiability in the responses generated by LLMs.</p>
         <p>Our methodology integrates guidelines, recommendations, and common elements from various methodologies to address issues like hallucinations and low-quality responses. The iterative nature of prompt refinement is emphasized, with suggestions such as asking opposing questions and using specific prompt patterns for improvement. This methodology is a valuable tool for designing prompts in engineering.</p>
         <p>The application example showcased the capabilities of ChatGPT in addressing engineering problems when integrated with other calculation tools. Future work stemming from this research involves applying the methodology in various engineering applications to incorporate the enhancements necessary for improving its utility.</p>
      </sec>
   </body>
   <back>
      <ref-list>
         <title>References</title>
         <ref id="B1">
            <label>1</label>
            <mixed-citation>Lund, B.D., Wang, T., Mannuru, N.R., Nie, B., Shimray, S., and Wang, Z., ChatGPT and a new academic reality: Artificial Intelligence-written research papers and the ethics of the large language models in scholarly publishing, J. Assoc. Inf. Sci. Technol., 74(5), pp. 570-581, 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1002/asi.24750">https://doi.org/10.1002/asi.24750</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Lund</surname>
                     <given-names>B.D.</given-names>
                  </name>
                  <name>
                     <surname>Wang</surname>
                     <given-names>T.</given-names>
                  </name>
                  <name>
                     <surname>Mannuru</surname>
                     <given-names>N.R.</given-names>
                  </name>
                  <name>
                     <surname>Nie</surname>
                     <given-names>B.</given-names>
                  </name>
                  <name>
                     <surname>Shimray</surname>
                     <given-names>S.</given-names>
                  </name>
                  <name>
                     <surname>Wang</surname>
                     <given-names>Z.</given-names>
                  </name>
               </person-group>
               <article-title>ChatGPT and a new academic reality: Artificial Intelligence-written research papers and the ethics of the large language models in scholarly publishing</article-title>
               <source>J. Assoc. Inf. Sci. Technol.</source>
               <volume>74</volume>
               <issue>5</issue>
               <fpage>570</fpage>
               <lpage>581</lpage>
               <year>2023</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1002/asi.24750">https://doi.org/10.1002/asi.24750</ext-link>
            </element-citation>
         </ref>
         <ref id="B2">
            <label>2</label>
            <mixed-citation>Macdonald, C., Adeloye, D., Sheikh, A., and Rudan, I., Can ChatGPT draft a research article? An example of population-level vaccine effectiveness analysis, J. Glob. Health, 13, 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.7189/JOGH.13.01003">https://doi.org/10.7189/JOGH.13.01003</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Macdonald</surname>
                     <given-names>C.</given-names>
                  </name>
                  <name>
                     <surname>Adeloye</surname>
                     <given-names>D.</given-names>
                  </name>
                  <name>
                     <surname>Sheikh</surname>
                     <given-names>A.</given-names>
                  </name>
                  <name>
                     <surname>Rudan</surname>
                      <given-names>I.</given-names>
                  </name>
               </person-group>
               <article-title>Can ChatGPT draft a research article? An example of population-level vaccine effectiveness analysis</article-title>
               <source>J. Glob. Health</source>
               <volume>13</volume>
               <year>2023</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.7189/JOGH.13.01003">https://doi.org/10.7189/JOGH.13.01003</ext-link>
            </element-citation>
         </ref>
         <ref id="B3">
            <label>3</label>
            <mixed-citation>Rudolph, J., Tan, S., and Tan, S., ChatGPT: bullshit spewer or the end of traditional assessments in higher education? J. Appl. Learn. Teach., 6(1), pp. 342-363, 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.37074/jalt.2023.6.1.9">https://doi.org/10.37074/jalt.2023.6.1.9</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Rudolph</surname>
                     <given-names>J.</given-names>
                  </name>
                  <name>
                     <surname>Tan</surname>
                     <given-names>S.</given-names>
                  </name>
                  <name>
                     <surname>Tan</surname>
                     <given-names>S.</given-names>
                  </name>
               </person-group>
               <article-title>ChatGPT: bullshit spewer or the end of traditional assessments in higher education?</article-title>
               <source>J. Appl. Learn. Teach.</source>
               <volume>6</volume>
               <issue>1</issue>
               <fpage>342</fpage>
               <lpage>363</lpage>
               <year>2023</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.37074/jalt.2023.6.1.9">https://doi.org/10.37074/jalt.2023.6.1.9</ext-link>
            </element-citation>
         </ref>
         <ref id="B4">
            <label>4</label>
            <mixed-citation>[4]	Pavlik, J.V., Collaborating with ChatGPT: considering the implications of generative artificial intelligence for journalism and media education, Journal. Mass Commun. Educ., 78(1), pp. 84-93, 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1177/10776958221149577">https://doi.org/10.1177/10776958221149577</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Pavlik</surname>
                     <given-names>J.V.</given-names>
                  </name>
               </person-group>
                <article-title>Collaborating with ChatGPT: considering the implications of generative artificial intelligence for journalism and media education</article-title>
                <source>Journal. Mass Commun. Educ.</source>
               <volume>78</volume>
               <issue>1</issue>
               <fpage>84</fpage>
               <lpage>93</lpage>
               <year>2023</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1177/10776958221149577">https://doi.org/10.1177/10776958221149577</ext-link>
            </element-citation>
         </ref>
         <ref id="B5">
            <label>5</label>
            <mixed-citation>[5]	Jha, S., Jha, S.K., Lincoln, P., Bastian, N.D., Velasquez, A., and Neema, S., Dehallucinating large language models using formal methods guided iterative prompting, in: 2023 IEEE International Conference on Assured Autonomy (ICAA), IEEE, 2023. pp. 149-152.</mixed-citation>
            <element-citation publication-type="confproc">
               <person-group person-group-type="author">
                  <name>
                     <surname>Jha</surname>
                     <given-names>S.</given-names>
                  </name>
                  <name>
                     <surname>Jha</surname>
                     <given-names>S.K.</given-names>
                  </name>
                  <name>
                     <surname>Lincoln</surname>
                     <given-names>P.</given-names>
                  </name>
                  <name>
                     <surname>Bastian</surname>
                     <given-names>N.D.</given-names>
                  </name>
                  <name>
                     <surname>Velasquez</surname>
                     <given-names>A.</given-names>
                  </name>
                  <name>
                     <surname>Neema</surname>
                     <given-names>S.</given-names>
                  </name>
               </person-group>
               <source>Dehallucinating large language models using formal methods guided iterative prompting</source>
               <conf-name>2023 IEEE International Conference on Assured Autonomy (ICAA)</conf-name>
               <conf-sponsor>IEEE</conf-sponsor>
               <conf-date>2023</conf-date>
               <fpage>149</fpage>
               <lpage>152</lpage>
            </element-citation>
         </ref>
         <ref id="B6">
            <label>6</label>
            <mixed-citation>[6]	Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., and Neubig, G., Pre-train, prompt, and predict: a systematic survey of prompting methods in natural language processing, ACM Comput. Surv., 55(9), pp. 1-35, 2023.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Liu</surname>
                     <given-names>P.</given-names>
                  </name>
                  <name>
                     <surname>Yuan</surname>
                     <given-names>W.</given-names>
                  </name>
                  <name>
                     <surname>Fu</surname>
                     <given-names>J.</given-names>
                  </name>
                  <name>
                     <surname>Jiang</surname>
                     <given-names>Z.</given-names>
                  </name>
                  <name>
                     <surname>Hayashi</surname>
                     <given-names>H.</given-names>
                  </name>
                  <name>
                     <surname>Neubig</surname>
                     <given-names>G.</given-names>
                  </name>
               </person-group>
               <article-title>Pre-train, prompt, and predict: a systematic survey of prompting methods in natural language processing</article-title>
               <source>ACM Comput. Surv.</source>
               <volume>55</volume>
               <issue>9</issue>
               <fpage>1</fpage>
               <lpage>35</lpage>
               <year>2023</year>
            </element-citation>
         </ref>
         <ref id="B7">
            <label>7</label>
            <mixed-citation>[7]	Lo, L.S., The art and science of prompt engineering: a new literacy in the information age, Internet Ref. Serv. Q., 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1080/10875301.2023.2227621">https://doi.org/10.1080/10875301.2023.2227621</ext-link>.</mixed-citation>
             <element-citation publication-type="journal">
                <person-group person-group-type="author">
                   <name>
                      <surname>Lo</surname>
                      <given-names>L.S.</given-names>
                   </name>
                </person-group>
                <article-title>The art and science of prompt engineering: a new literacy in the information age</article-title>
                <source>Internet Ref. Serv. Q.</source>
                <year>2023</year>
                <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1080/10875301.2023.2227621">https://doi.org/10.1080/10875301.2023.2227621</ext-link>
             </element-citation>
         </ref>
         <ref id="B8">
            <label>8</label>
            <mixed-citation>[8]	Giray, L., Prompt engineering with ChatGPT: a guide for academic writers, Ann. Biomed. Eng., 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s10439-023-03272-4">https://doi.org/10.1007/s10439-023-03272-4</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Giray</surname>
                     <given-names>L.</given-names>
                  </name>
               </person-group>
               <article-title>Prompt engineering with ChatGPT: a guide for academic writers</article-title>
               <source>Ann. Biomed. Eng.</source>
               <year>2023</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s10439-023-03272-4">https://doi.org/10.1007/s10439-023-03272-4</ext-link>
            </element-citation>
         </ref>
         <ref id="B9">
            <label>9</label>
            <mixed-citation>[9]	White, J. et al., A prompt pattern catalog to enhance prompt engineering with ChatGPT, arXiv preprint arXiv:2302.11382, 2023.</mixed-citation>
            <element-citation publication-type="book">
               <person-group person-group-type="author">
                  <name>
                     <surname>White</surname>
                     <given-names>J.</given-names>
                  </name>
                  <etal/>
               </person-group>
                <source>A prompt pattern catalog to enhance prompt engineering with ChatGPT</source>
                <comment>arXiv preprint arXiv:2302.11382</comment>
               <year>2023</year>
            </element-citation>
         </ref>
         <ref id="B10">
            <label>10</label>
            <mixed-citation>[10]	Busch, K., Rochlitzer, A., Sola, D., and Leopold, H., Just tell me: prompt engineering in business process management, in: International Conference on Business Process Modeling, Development and Support, Springer, 2023, pp. 3-11.</mixed-citation>
            <element-citation publication-type="confproc">
               <person-group person-group-type="author">
                  <name>
                     <surname>Busch</surname>
                     <given-names>K.</given-names>
                  </name>
                  <name>
                     <surname>Rochlitzer</surname>
                     <given-names>A.</given-names>
                  </name>
                  <name>
                     <surname>Sola</surname>
                     <given-names>D.</given-names>
                  </name>
                  <name>
                     <surname>Leopold</surname>
                     <given-names>H.</given-names>
                  </name>
               </person-group>
               <source>Just tell me: prompt engineering in business process management</source>
               <conf-name>International Conference on Business Process Modeling, Development and Support</conf-name>
               <conf-date>2023</conf-date>
               <fpage>3</fpage>
               <lpage>11</lpage>
            </element-citation>
         </ref>
         <ref id="B11">
            <label>11</label>
            <mixed-citation>[11]	Lecler, A., Duron, L., and Soyer, P., Revolutionizing radiology with GPT-based models: current applications, future possibilities and limitations of ChatGPT, Diagn. Interv. Imaging, 104(6), pp. 269-274, 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.diii.2023.02.003">https://doi.org/10.1016/j.diii.2023.02.003</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Lecler</surname>
                     <given-names>A.</given-names>
                  </name>
                  <name>
                     <surname>Duron</surname>
                     <given-names>L.</given-names>
                  </name>
                  <name>
                      <surname>Soyer</surname>
                     <given-names>P.</given-names>
                  </name>
               </person-group>
                <article-title>Revolutionizing radiology with GPT-based models: current applications, future possibilities and limitations of ChatGPT</article-title>
                <source>Diagn. Interv. Imaging</source>
               <volume>104</volume>
               <issue>6</issue>
               <fpage>269</fpage>
               <lpage>274</lpage>
               <year>2023</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.diii.2023.02.003">https://doi.org/10.1016/j.diii.2023.02.003</ext-link>
            </element-citation>
         </ref>
         <ref id="B12">
            <label>12</label>
            <mixed-citation>[12]	Eysenbach, G., The role of ChatGPT, generative language models, and artificial intelligence in medical education: a conversation with ChatGPT and a call for papers, JMIR Med. Educ., 9, 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.2196/46885">https://doi.org/10.2196/46885</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                      <surname>Eysenbach</surname>
                     <given-names>G.</given-names>
                  </name>
               </person-group>
               <article-title>The role of ChatGPT, generative language models, and artificial intelligence in medical education: a conversation with ChatGPT and a call for papers</article-title>
               <source>JMIR Med. Educ.</source>
               <volume>9</volume>
               <year>2023</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.2196/46885">https://doi.org/10.2196/46885</ext-link>
            </element-citation>
         </ref>
         <ref id="B13">
            <label>13</label>
            <mixed-citation>[13]	Wu, T., Terry, M., and Cai, C.J., AI chains: transparent and controllable Human-AI interaction by chaining large language model prompts, in: Conference on Human Factors in Computing Systems - Proceedings, 2022. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1145/3491102.3517582">https://doi.org/10.1145/3491102.3517582</ext-link>.</mixed-citation>
            <element-citation publication-type="confproc">
               <person-group person-group-type="author">
                  <name>
                     <surname>Wu</surname>
                     <given-names>T.</given-names>
                  </name>
                  <name>
                      <surname>Terry</surname>
                     <given-names>M.</given-names>
                  </name>
                  <name>
                     <surname>Cai</surname>
                     <given-names>C.J.</given-names>
                  </name>
               </person-group>
                <source>AI chains: transparent and controllable Human-AI interaction by chaining large language model prompts</source>
                <conf-name>Conference on Human Factors in Computing Systems - Proceedings</conf-name>
               <conf-date>2022</conf-date>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1145/3491102.3517582">https://doi.org/10.1145/3491102.3517582</ext-link>
            </element-citation>
         </ref>
         <ref id="B14">
            <label>14</label>
            <mixed-citation>[14]	Ray, P.P., and Majumder, P., Assessing the accuracy of responses by the language model ChatGPT to questions regarding bariatric surgery: a critical appraisal, Obes. Surg., 33(8), pp. 2588-2589, 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s11695-023-06664-6">https://doi.org/10.1007/s11695-023-06664-6</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                      <surname>Ray</surname>
                     <given-names>P.P.</given-names>
                  </name>
                  <name>
                     <surname>Majumder</surname>
                      <given-names>P.</given-names>
                  </name>
               </person-group>
                <article-title>Assessing the accuracy of responses by the language model ChatGPT to questions regarding bariatric surgery: a critical appraisal</article-title>
               <source>Obes. Surg.</source>
               <volume>33</volume>
               <issue>8</issue>
               <fpage>2588</fpage>
               <lpage>2589</lpage>
               <year>2023</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s11695-023-06664-6">https://doi.org/10.1007/s11695-023-06664-6</ext-link>
            </element-citation>
         </ref>
         <ref id="B15">
            <label>15</label>
            <mixed-citation>[15]	Gupta, R., Herzog, I., Weisberger, J., Chao, J., Chaiyasate, K., and Lee, E.S., Utilization of ChatGPT for plastic surgery research: friend or foe?, J. Plast. Reconstr. Aesthet. Surg., 80, pp. 145-147, 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.bjps.2023.03.004">https://doi.org/10.1016/j.bjps.2023.03.004</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Gupta</surname>
                     <given-names>R.</given-names>
                  </name>
                  <name>
                     <surname>Herzog</surname>
                     <given-names>I.</given-names>
                  </name>
                  <name>
                     <surname>Weisberger</surname>
                     <given-names>J.</given-names>
                  </name>
                  <name>
                     <surname>Chao</surname>
                     <given-names>J.</given-names>
                  </name>
                  <name>
                      <surname>Chaiyasate</surname>
                     <given-names>K.</given-names>
                  </name>
                  <name>
                     <surname>Lee</surname>
                     <given-names>E.S.</given-names>
                  </name>
               </person-group>
                <article-title>Utilization of ChatGPT for plastic surgery research: friend or foe?</article-title>
               <source>J. Plast. Reconstr. Aesthet. Surg.</source>
               <volume>80</volume>
               <fpage>145</fpage>
               <lpage>147</lpage>
               <year>2023</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.bjps.2023.03.004">https://doi.org/10.1016/j.bjps.2023.03.004</ext-link>
            </element-citation>
         </ref>
         <ref id="B16">
            <label>16</label>
            <mixed-citation>[16]	Deiana, G., Dettori, M., Arghittu, A., Azara, A., Gabutti, G., and Castiglia, P., Artificial intelligence and public health: evaluating ChatGPT responses to vaccination myths and misconceptions, Vaccines, 11(7), art. 11071217, 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3390/vaccines11071217">https://doi.org/10.3390/vaccines11071217</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Deiana</surname>
                     <given-names>G.</given-names>
                  </name>
                  <name>
                     <surname>Dettori</surname>
                     <given-names>M.</given-names>
                  </name>
                  <name>
                     <surname>Arghittu</surname>
                     <given-names>A.</given-names>
                  </name>
                  <name>
                     <surname>Azara</surname>
                     <given-names>A.</given-names>
                  </name>
                  <name>
                     <surname>Gabutti</surname>
                     <given-names>G.</given-names>
                  </name>
                  <name>
                     <surname>Castiglia</surname>
                     <given-names>P.</given-names>
                  </name>
               </person-group>
                <article-title>Artificial intelligence and public health: evaluating ChatGPT responses to vaccination myths and misconceptions</article-title>
               <source>Vaccines</source>
               <volume>11</volume>
               <issue>7</issue>
               <comment>art. 11071217</comment>
               <year>2023</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3390/vaccines11071217">https://doi.org/10.3390/vaccines11071217</ext-link>
            </element-citation>
         </ref>
         <ref id="B17">
            <label>17</label>
            <mixed-citation>[17]	Corsello, A. and Santangelo, A., May artificial intelligence influence future pediatric research?-The case of ChatGPT, Children, 10(4), 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3390/children10040757">https://doi.org/10.3390/children10040757</ext-link>.</mixed-citation>
             <element-citation publication-type="journal">
                <person-group person-group-type="author">
                   <name>
                      <surname>Corsello</surname>
                      <given-names>A.</given-names>
                   </name>
                   <name>
                      <surname>Santangelo</surname>
                      <given-names>A.</given-names>
                   </name>
                </person-group>
                <article-title>May artificial intelligence influence future pediatric research?-The case of ChatGPT</article-title>
                <source>Children</source>
               <volume>10</volume>
               <issue>4</issue>
               <year>2023</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3390/children10040757">https://doi.org/10.3390/children10040757</ext-link>
            </element-citation>
         </ref>
         <ref id="B18">
            <label>18</label>
            <mixed-citation>[18]	Eager, B., and Brunton, R., Prompting higher education towards AI-Augmented teaching and learning practice, J. Univ. Teach. Learn. Pract., 20(5), 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.53761/1.20.5.02">https://doi.org/10.53761/1.20.5.02</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Eager</surname>
                     <given-names>B.</given-names>
                  </name>
                  <name>
                     <surname>Brunton</surname>
                     <given-names>R.</given-names>
                  </name>
               </person-group>
               <article-title>Prompting higher education towards AI-Augmented teaching and learning practice</article-title>
               <source>J. Univ. Teach. Learn. Pract.</source>
               <volume>20</volume>
               <issue>5</issue>
               <year>2023</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.53761/1.20.5.02">https://doi.org/10.53761/1.20.5.02</ext-link>
            </element-citation>
         </ref>
         <ref id="B19">
            <label>19</label>
            <mixed-citation>[19]	Lo, L.S., The CLEAR path: a framework for enhancing information literacy through prompt engineering, J. Acad. Librariansh., 49(4), 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.acalib.2023.102720">https://doi.org/10.1016/j.acalib.2023.102720</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Lo</surname>
                     <given-names>L.S.</given-names>
                  </name>
               </person-group>
                <article-title>The CLEAR path: a framework for enhancing information literacy through prompt engineering</article-title>
               <source>J. Acad. Librariansh.</source>
               <volume>49</volume>
               <issue>4</issue>
               <year>2023</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.acalib.2023.102720">https://doi.org/10.1016/j.acalib.2023.102720</ext-link>
            </element-citation>
         </ref>
         <ref id="B20">
            <label>20</label>
            <mixed-citation>[20]	Dwivedi, Y.K., et al., So what if ChatGPT wrote it? Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy, Int. J. Inf. Manag., 71, 2023. DOI: <ext-link ext-link-type="uri"
                          xlink:href="https://doi.org/10.1016/j.ijinfomgt.2023.102642">https://doi.org/10.1016/j.ijinfomgt.2023.102642</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Dwivedi</surname>
                      <given-names>Y.K.</given-names>
                  </name>
                  <etal/>
               </person-group>
                <article-title>So what if ChatGPT wrote it? Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy</article-title>
               <source>Int. J. Inf. Manag.</source>
               <volume>71</volume>
               <year>2023</year>
               <ext-link ext-link-type="uri"
                         xlink:href="https://doi.org/10.1016/j.ijinfomgt.2023.102642">https://doi.org/10.1016/j.ijinfomgt.2023.102642</ext-link>
            </element-citation>
         </ref>
         <ref id="B21">
            <label>21</label>
            <mixed-citation>[21]	Harrer, S., Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine, eBioMedicine, 90, 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.ebiom.2023.104512">https://doi.org/10.1016/j.ebiom.2023.104512</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Harrer</surname>
                     <given-names>S.</given-names>
                  </name>
               </person-group>
                <article-title>Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine</article-title>
               <source>eBioMedicine</source>
               <volume>90</volume>
               <year>2023</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.ebiom.2023.104512">https://doi.org/10.1016/j.ebiom.2023.104512</ext-link>
            </element-citation>
         </ref>
         <ref id="B22">
            <label>22</label>
            <mixed-citation>[22]	Crawford, J., Cowling, M., and Allen, K.-A., Leadership is needed for ethical ChatGPT: character, assessment, and learning using artificial intelligence (AI), J. Univ. Teach. Learn. Pract., 20(3), 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.53761/1.20.3.02">https://doi.org/10.53761/1.20.3.02</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Crawford</surname>
                     <given-names>J.</given-names>
                  </name>
                  <name>
                     <surname>Cowling</surname>
                     <given-names>M.</given-names>
                  </name>
                  <name>
                     <surname>Allen</surname>
                     <given-names>K.-A.</given-names>
                  </name>
               </person-group>
               <article-title>Leadership is needed for ethical ChatGPT: character, assessment, and learning using artificial intelligence (AI)</article-title>
               <source>J. Univ. Teach. Learn. Pract.</source>
               <volume>20</volume>
               <issue>3</issue>
               <year>2023</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.53761/1.20.3.02">https://doi.org/10.53761/1.20.3.02</ext-link>
            </element-citation>
         </ref>
         <ref id="B23">
            <label>23</label>
            <mixed-citation>[23]	Alexander, K., Savvidou, C., and Alexander, C., Who wrote this essay? Detecting AI-generated writing in second language education in higher education, Teach. Engl. Technol., 23(2), pp. 25-43, 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.56297/BUKA4060/XHLD5365">https://doi.org/10.56297/BUKA4060/XHLD5365</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Alexander</surname>
                     <given-names>K.</given-names>
                  </name>
                  <name>
                     <surname>Savvidou</surname>
                     <given-names>C.</given-names>
                  </name>
                  <name>
                     <surname>Alexander</surname>
                     <given-names>C.</given-names>
                  </name>
               </person-group>
                <article-title>Who wrote this essay? Detecting AI-generated writing in second language education in higher education</article-title>
               <source>Teach. Engl. Technol.</source>
               <volume>23</volume>
               <issue>2</issue>
               <fpage>25</fpage>
               <lpage>43</lpage>
               <year>2023</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.56297/BUKA4060/XHLD5365">https://doi.org/10.56297/BUKA4060/XHLD5365</ext-link>
            </element-citation>
         </ref>
         <ref id="B24">
            <label>24</label>
            <mixed-citation>[24]	Lim, W.M., Gunasekara, A., Pallant, J.L., Pallant, J.I., and Pechenkina, E., Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators, Int. J. Manag. Educ., 21(2), 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.ijme.2023.100790">https://doi.org/10.1016/j.ijme.2023.100790</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Lim</surname>
                     <given-names>W.M.</given-names>
                  </name>
                  <name>
                     <surname>Gunasekara</surname>
                     <given-names>A.</given-names>
                  </name>
                  <name>
                     <surname>Pallant</surname>
                     <given-names>J.L.</given-names>
                  </name>
                  <name>
                     <surname>Pallant</surname>
                     <given-names>J.I.</given-names>
                  </name>
                  <name>
                     <surname>Pechenkina</surname>
                     <given-names>E.</given-names>
                  </name>
               </person-group>
               <article-title>Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators</article-title>
               <source>Int. J. Manag. Educ.</source>
               <volume>21</volume>
               <issue>2</issue>
               <year>2023</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.ijme.2023.100790">https://doi.org/10.1016/j.ijme.2023.100790</ext-link>
            </element-citation>
         </ref>
         <ref id="B25">
            <label>25</label>
            <mixed-citation>[]	Bilal, M., Jamil, Y., Rana, D., and Shah, H.H., Enhancing awareness and Self-diagnosis of obstructive sleep apnea using AI-Powered chatbots: the role of ChatGPT in revolutionizing healthcare, Ann. Biomed. Eng., 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s10439-023-03298-8">https://doi.org/10.1007/s10439-023-03298-8</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Bilal</surname>
                     <given-names>M.</given-names>
                  </name>
                  <name>
                     <surname>Jamil</surname>
                     <given-names>Y.</given-names>
                  </name>
                  <name>
                     <surname>Rana</surname>
                     <given-names>D.</given-names>
                  </name>
                  <name>
                     <surname>Shah</surname>
                     <given-names>H.H.</given-names>
                  </name>
               </person-group>
               <article-title>Enhancing awareness and Self-diagnosis of obstructive sleep apnea using AI-Powered chatbots: the role of ChatGPT in revolutionizing healthcare</article-title>
               <source>Ann. Biomed. Eng.</source>
               <year>2023</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s10439-023-03298-8">https://doi.org/10.1007/s10439-023-03298-8</ext-link>
            </element-citation>
         </ref>
         <ref id="B26">
            <label>26</label>
             <mixed-citation>[]	Epstein, R.H., and Dexter, F., Variability in large language Models' responses to medical licensing and certification examinations. Comment on "How Does ChatGPT Perform on the United States Medical Licensing Examination? The Implications of Large Language Models for Medical Education and Knowledge Assessment", JMIR Med. Educ., 9, 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.2196/48305">https://doi.org/10.2196/48305</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Epstein</surname>
                     <given-names>R.H.</given-names>
                  </name>
                  <name>
                     <surname>Dexter</surname>
                     <given-names>F.</given-names>
                  </name>
               </person-group>
                <article-title>Variability in large language Models' responses to medical licensing and certification examinations. Comment on "How Does ChatGPT Perform on the United States Medical Licensing Examination? The Implications of Large Language Models for Medical Education and Knowledge Assessment"</article-title>
               <source>JMIR Med. Educ.</source>
               <volume>9</volume>
               <year>2023</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.2196/48305">https://doi.org/10.2196/48305</ext-link>
            </element-citation>
         </ref>
         <ref id="B27">
            <label>27</label>
            <mixed-citation>[]	Marchandot, B., Matsushita, K., Carmona, A., Trimaille, A., and Morel, O., ChatGPT: the next frontier in academic writing for cardiologists or a pandora's box of ethical dilemmas, Eur. Heart J. Open, 3(2), 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1093/ehjopen/oead007">https://doi.org/10.1093/ehjopen/oead007</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Marchandot</surname>
                     <given-names>B.</given-names>
                  </name>
                  <name>
                     <surname>Matsushita</surname>
                     <given-names>K.</given-names>
                  </name>
                  <name>
                     <surname>Carmona</surname>
                     <given-names>A.</given-names>
                  </name>
                  <name>
                     <surname>Trimaille</surname>
                     <given-names>A.</given-names>
                  </name>
                  <name>
                     <surname>Morel</surname>
                     <given-names>O.</given-names>
                  </name>
               </person-group>
               <article-title>ChatGPT: the next frontier in academic writing for cardiologists or a pandora's box of ethical dilemmas</article-title>
               <source>Eur. Heart J. Open</source>
               <volume>3</volume>
               <issue>2</issue>
               <year>2023</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1093/ehjopen/oead007">https://doi.org/10.1093/ehjopen/oead007</ext-link>
            </element-citation>
         </ref>
         <ref id="B28">
            <label>28</label>
             <mixed-citation>[]	Graf, A. and Bernardi, R.E., ChatGPT in research: balancing ethics, transparency and advancement, Neuroscience, 515, pp. 71-73, 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.neuroscience.2023.02.008">https://doi.org/10.1016/j.neuroscience.2023.02.008</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Graf</surname>
                     <given-names>A.</given-names>
                  </name>
                  <name>
                     <surname>Bernardi</surname>
                     <given-names>R.E.</given-names>
                  </name>
               </person-group>
                <article-title>ChatGPT in research: balancing ethics, transparency and advancement</article-title>
               <source>Neuroscience</source>
               <volume>515</volume>
               <fpage>71</fpage>
               <lpage>73</lpage>
               <year>2023</year>
               <ext-link ext-link-type="uri"
                         xlink:href="https://doi.org/10.1016/j.neuroscience.2023.02.008">https://doi.org/10.1016/j.neuroscience.2023.02.008</ext-link>
            </element-citation>
         </ref>
         <ref id="B29">
            <label>29</label>
             <mixed-citation>[]	Yan, D., Impact of ChatGPT on learners in a L2 writing practicum: an exploratory investigation, Educ. Inf. Technol., 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s10639-023-11742-4">https://doi.org/10.1007/s10639-023-11742-4</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Yan</surname>
                     <given-names>D.</given-names>
                  </name>
               </person-group>
                <article-title>Impact of ChatGPT on learners in a L2 writing practicum: an exploratory investigation</article-title>
               <source>Educ. Inf. Technol.</source>
               <year>2023</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s10639-023-11742-4">https://doi.org/10.1007/s10639-023-11742-4</ext-link>
            </element-citation>
         </ref>
         <ref id="B30">
            <label>30</label>
            <mixed-citation>[]	Ruksakulpiwat, S., Kumar, A., and Ajibade, A., Using ChatGPT in Medical research: current status and future directions, J. Multidiscip. Healthc., 16, pp. 1513-1520, 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.2147/JMDH.S413470">https://doi.org/10.2147/JMDH.S413470</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Ruksakulpiwat</surname>
                     <given-names>S.</given-names>
                  </name>
                  <name>
                     <surname>Kumar</surname>
                     <given-names>A.</given-names>
                  </name>
                  <name>
                     <surname>Ajibade</surname>
                     <given-names>A.</given-names>
                  </name>
               </person-group>
               <article-title>Using ChatGPT in Medical research: current status and future directions</article-title>
               <source>J. Multidiscip. Healthc.</source>
               <volume>16</volume>
               <fpage>1513</fpage>
               <lpage>1520</lpage>
               <year>2023</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.2147/JMDH.S413470">https://doi.org/10.2147/JMDH.S413470</ext-link>
            </element-citation>
         </ref>
         <ref id="B31">
            <label>31</label>
            <mixed-citation>[]	Májovský, M., Černý, M., Kasal, M., Komarc, M., and Netuka, D., Artificial intelligence can generate fraudulent but authentic-looking scientific medical articles: pandora's box has been opened, J. Med. Internet Res., 25, 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.2196/46924">https://doi.org/10.2196/46924</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Májovský</surname>
                     <given-names>M.</given-names>
                  </name>
                  <name>
                     <surname>Černý</surname>
                     <given-names>M.</given-names>
                  </name>
                  <name>
                     <surname>Kasal</surname>
                     <given-names>M.</given-names>
                  </name>
                  <name>
                     <surname>Komarc</surname>
                     <given-names>M.</given-names>
                  </name>
                  <name>
                     <surname>Netuka</surname>
                     <given-names>D.</given-names>
                  </name>
               </person-group>
               <article-title>Artificial intelligence can generate fraudulent but authentic-looking scientific medical articles: pandora's box has been opened</article-title>
               <source>J. Med. Internet Res.</source>
               <volume>25</volume>
               <year>2023</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.2196/46924">https://doi.org/10.2196/46924</ext-link>
            </element-citation>
         </ref>
         <ref id="B32">
            <label>32</label>
             <mixed-citation>[]	Spirling, A., Why open-source generative AI models are an ethical way forward for science, Nature, 616(7957), art. 413, 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1038/d41586-023-01295-4">https://doi.org/10.1038/d41586-023-01295-4</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Spirling</surname>
                     <given-names>A.</given-names>
                  </name>
               </person-group>
                <article-title>Why open-source generative AI models are an ethical way forward for science</article-title>
                <source>Nature</source>
               <volume>616</volume>
               <issue>7957</issue>
               <comment>art. 413</comment>
               <year>2023</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1038/d41586-023-01295-4">https://doi.org/10.1038/d41586-023-01295-4</ext-link>
            </element-citation>
         </ref>
         <ref id="B33">
            <label>33</label>
             <mixed-citation>[]	Wang, S.H., OpenAI - explain why some countries are excluded from ChatGPT, Nature, 615(7950), art. 34, 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1038/d41586-023-00553-9">https://doi.org/10.1038/d41586-023-00553-9</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Wang</surname>
                     <given-names>S.H.</given-names>
                  </name>
               </person-group>
                <article-title>OpenAI - explain why some countries are excluded from ChatGPT</article-title>
                <source>Nature</source>
               <volume>615</volume>
               <issue>7950</issue>
               <comment>art. 34</comment>
               <year>2023</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1038/d41586-023-00553-9">https://doi.org/10.1038/d41586-023-00553-9</ext-link>
            </element-citation>
         </ref>
         <ref id="B34">
            <label>34</label>
             <mixed-citation>[]	Cooper, G., Examining science education in ChatGPT: An exploratory study of generative artificial intelligence, J. Sci. Educ. Technol., 32(3), pp. 444-452, 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s10956-023-10039-y">https://doi.org/10.1007/s10956-023-10039-y</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Cooper</surname>
                     <given-names>G.</given-names>
                  </name>
               </person-group>
                <article-title>Examining science education in ChatGPT: An exploratory study of generative artificial intelligence</article-title>
               <source>J. Sci. Educ. Technol.</source>
               <volume>32</volume>
               <issue>3</issue>
               <fpage>444</fpage>
               <lpage>452</lpage>
               <year>2023</year>
                <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s10956-023-10039-y">https://doi.org/10.1007/s10956-023-10039-y</ext-link>
            </element-citation>
         </ref>
         <ref id="B35">
            <label>35</label>
             <mixed-citation>[]	Chang, E.Y., Prompting large language models with the Socratic method, in: 2023 IEEE 13<sup>th</sup> Annual Computing and Communication Workshop and Conference, CCWC 2023, 2023, pp. 351-360. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1109/CCWC57344.2023.10099179">https://doi.org/10.1109/CCWC57344.2023.10099179</ext-link>.</mixed-citation>
            <element-citation publication-type="confproc">
               <person-group person-group-type="author">
                  <name>
                     <surname>Chang</surname>
                     <given-names>E.Y.</given-names>
                  </name>
               </person-group>
                <source>Prompting large language models with the Socratic method</source>
               <conf-name>2023 IEEE 13th Annual Computing and Communication Workshop and Conference</conf-name>
               <conf-sponsor>CCWC</conf-sponsor>
               <conf-date>2023</conf-date>
               <year>2023</year>
               <fpage>351</fpage>
               <lpage>360</lpage>
               <ext-link ext-link-type="uri"
                         xlink:href="https://doi.org/10.1109/CCWC57344.2023.10099179">https://doi.org/10.1109/CCWC57344.2023.10099179</ext-link>
            </element-citation>
         </ref>
         <ref id="B36">
            <label>36</label>
            <mixed-citation>[]	Heston, T.F., and Khun, C., Prompt engineering in medical education, Int. Med. Educ., 2(3), pp. 198-205, 2023.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Heston</surname>
                     <given-names>T.F.</given-names>
                  </name>
                  <name>
                     <surname>Khun</surname>
                     <given-names>C.</given-names>
                  </name>
               </person-group>
               <article-title>Prompt engineering in medical education</article-title>
               <source>Int. Med. Educ.</source>
               <volume>2</volume>
               <issue>3</issue>
               <fpage>198</fpage>
               <lpage>205</lpage>
               <year>2023</year>
            </element-citation>
         </ref>
         <ref id="B37">
            <label>37</label>
            <mixed-citation>[]	Yong, G., Jeon, K., Gil, D., and Lee, G., Prompt engineering for zero-shot and few-shot defect detection and classification using a visual-language pretrained model, Comput.-Aided Civ. Infrastruct. Eng., 38(11), pp. 1536-1554, 2023. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1111/mice.12954">https://doi.org/10.1111/mice.12954</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Yong</surname>
                     <given-names>G.</given-names>
                  </name>
                  <name>
                     <surname>Jeon</surname>
                     <given-names>K.</given-names>
                  </name>
                  <name>
                     <surname>Gil</surname>
                     <given-names>D.</given-names>
                  </name>
                  <name>
                     <surname>Lee</surname>
                     <given-names>G.</given-names>
                  </name>
               </person-group>
               <article-title>Prompt engineering for zero-shot and few-shot defect detection and classification using a visual-language pretrained model</article-title>
               <source>Comput.-Aided Civ. Infrastruct. Eng.</source>
               <volume>38</volume>
               <issue>11</issue>
               <fpage>1536</fpage>
               <lpage>1554</lpage>
               <year>2023</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1111/mice.12954">https://doi.org/10.1111/mice.12954</ext-link>
            </element-citation>
         </ref>
         <ref id="B38">
            <label>38</label>
            <mixed-citation>[]	Bozkurt, A., and Sharma, R.C., Generative AI and prompt engineering: the art of whispering to let the genie out of the algorithmic world, Asian J. Distance Educ., 2023.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Bozkurt</surname>
                     <given-names>A.</given-names>
                  </name>
                  <name>
                     <surname>Sharma</surname>
                     <given-names>R.C.</given-names>
                  </name>
               </person-group>
               <article-title>Generative AI and prompt engineering: the art of whispering to let the genie out of the algorithmic world</article-title>
               <source>Asian J. Distance Educ.</source>
               <year>2023</year>
            </element-citation>
         </ref>
         <ref id="B39">
            <label>39</label>
             <mixed-citation>[]	Elsevier, Why choose Scopus - Scopus benefits. [online]. Accessed: July 31<sup>st</sup> of 2020. <comment>Available at: <ext-link ext-link-type="uri" xlink:href="https://www.elsevier.com/solutions/scopus/why-choose-scopus">https://www.elsevier.com/solutions/scopus/why-choose-scopus</ext-link>
                </comment>
             </mixed-citation>
            <element-citation publication-type="report">
               <person-group person-group-type="author">
                  <collab>Elsevier</collab>
               </person-group>
                <source>Why choose Scopus - Scopus benefits</source>
                <comment>online</comment>
                <date-in-citation content-type="access-date" iso-8601-date="2020-07-31">July 31st of 2020</date-in-citation>
                <comment>Available at: <ext-link ext-link-type="uri"
                             xlink:href="https://www.elsevier.com/solutions/scopus/why-choose-scopus">https://www.elsevier.com/solutions/scopus/why-choose-scopus</ext-link>
               </comment>
            </element-citation>
         </ref>
         <ref id="B40">
            <label>40</label>
             <mixed-citation>[]	Shieh, J., Best practices for prompt engineering with OpenAI API, OpenAI, [online]. Sept. 2023. Accessed: October 3<sup>rd</sup> of 2023. <comment>Available at: <ext-link ext-link-type="uri"
                             xlink:href="https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-openai-api">https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-openai-api</ext-link>
               </comment>
            </mixed-citation>
            <element-citation publication-type="book">
               <person-group person-group-type="author">
                  <name>
                     <surname>Shieh</surname>
                     <given-names>J.</given-names>
                  </name>
               </person-group>
                <source>Best practices for prompt engineering with OpenAI API, OpenAI</source>
               <comment>online</comment>
               <month>09</month>
               <year>2023</year>
               <date-in-citation content-type="access-date" iso-8601-date="2023-10-03">October 3rd of 2023</date-in-citation>
               <comment>Available at: <ext-link ext-link-type="uri"
                            xlink:href="https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-openai-api">https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-openai-api</ext-link>
               </comment>
            </element-citation>
         </ref>
         <ref id="B41">
            <label>41</label>
             <mixed-citation>[]	Yao, S. et al., Tree of thoughts: deliberate problem solving with large language models, arXiv preprint arXiv:2305.10601, 2023.</mixed-citation>
            <element-citation publication-type="book">
               <person-group person-group-type="author">
                  <name>
                     <surname>Yao</surname>
                     <given-names>S.</given-names>
                  </name>
                  <etal/>
               </person-group>
               <source>Tree of thoughts: deliberate problem solving with large language models</source>
                <comment>arXiv preprint arXiv:2305.10601</comment>
               <year>2023</year>
            </element-citation>
         </ref>
         <ref id="B42">
            <label>42</label>
            <mixed-citation>[]	Henrickson, L., and Meroño-Peñuela, A., Prompting meaning: a hermeneutic approach to optimising prompt engineering with ChatGPT, AI Soc., 2023, DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s00146-023-01752-8">https://doi.org/10.1007/s00146-023-01752-8</ext-link>.</mixed-citation>
            <element-citation publication-type="book">
               <person-group person-group-type="author">
                  <name>
                     <surname>Henrickson</surname>
                     <given-names>L.</given-names>
                  </name>
                  <name>
                     <surname>Meroño-Peñuela</surname>
                     <given-names>A.</given-names>
                  </name>
               </person-group>
               <source>Prompting meaning: a hermeneutic approach to optimising prompt engineering with ChatGPT</source>
               <publisher-name>AI Soc</publisher-name>
               <year>2023</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s00146-023-01752-8">https://doi.org/10.1007/s00146-023-01752-8</ext-link>
            </element-citation>
         </ref>
         <ref id="B43">
            <label>43</label>
             <mixed-citation>[]	Google, Bard - Chat based AI tool from Google, powered by PaLM 2. [online]. Accessed: October 4<sup>th</sup> of 2023. <comment>Available at: <ext-link ext-link-type="uri" xlink:href="https://bard.google.com">https://bard.google.com</ext-link>
               </comment>
            </mixed-citation>
            <element-citation publication-type="book">
               <person-group person-group-type="author">
                  <collab>Google</collab>
               </person-group>
                <source>Bard - Chat based AI tool from Google, powered by PaLM 2</source>
               <comment>online</comment>
               <date-in-citation content-type="access-date" iso-8601-date="2023-10-04">October 4th of 2023</date-in-citation>
               <comment>Available at: <ext-link ext-link-type="uri" xlink:href="https://bard.google.com">https://bard.google.com</ext-link>
               </comment>
            </element-citation>
         </ref>
         <ref id="B44">
            <label>44</label>
             <mixed-citation>[]	Microsoft, Your AI-Powered Copilot for the Web | Microsoft Bing. [online]. Accessed: October 4<sup>th</sup> of 2023. <comment>Available at: <ext-link ext-link-type="uri" xlink:href="https://www.microsoft.com/en-us/bing">https://www.microsoft.com/en-us/bing</ext-link>
               </comment>
            </mixed-citation>
            <element-citation publication-type="book">
               <person-group person-group-type="author">
                  <collab>Microsoft</collab>
               </person-group>
               <source>Your AI-Powered Copilot for the Web</source>
               <publisher-name>Microsoft Bing</publisher-name>
               <comment>online</comment>
               <date-in-citation content-type="access-date" iso-8601-date="2023-10-04">October 4th of 2023</date-in-citation>
               <comment>Available at: <ext-link ext-link-type="uri" xlink:href="https://www.microsoft.com/en-us/bing">https://www.microsoft.com/en-us/bing</ext-link>
               </comment>
            </element-citation>
         </ref>
         <ref id="B45">
            <label>45</label>
             <mixed-citation>[]	Spasic, A.J., and Jankovic, D.S., Using ChatGPT standard prompt engineering techniques in lesson preparation: role, instructions and seed-word prompts, in: 2023 58<sup>th</sup> International Scientific Conference on Information, Communication and Energy Systems and Technologies, ICEST 2023 - Proceedings, 2023, pp. 47-50. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1109/ICEST58410.2023.10187269">https://doi.org/10.1109/ICEST58410.2023.10187269</ext-link>.</mixed-citation>
            <element-citation publication-type="confproc">
               <person-group person-group-type="author">
                  <name>
                     <surname>Spasic</surname>
                     <given-names>A.J.</given-names>
                  </name>
                  <name>
                     <surname>Jankovic</surname>
                     <given-names>D.S.</given-names>
                  </name>
               </person-group>
               <source>Using ChatGPT standard prompt engineering techniques in lesson preparation: role, instructions and seed-word prompts</source>
                <conf-name>2023 58th International Scientific Conference on Information, Communication and Energy Systems and Technologies, ICEST 2023 - Proceedings</conf-name>
               <conf-date>2023</conf-date>
               <fpage>47</fpage>
               <lpage>50</lpage>
               <ext-link ext-link-type="uri"
                         xlink:href="https://doi.org/10.1109/ICEST58410.2023.10187269">https://doi.org/10.1109/ICEST58410.2023.10187269</ext-link>
            </element-citation>
         </ref>
         <ref id="B46">
            <label>46</label>
             <mixed-citation>[]	Rudin, C., and Radin, J., Why are we using black box models in AI when we don't need to? A lesson from an explainable AI competition, Harv. Data Sci. Rev., 1.2, p. 9, 2019. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1162/99608f92.5a8a3a3d">https://doi.org/10.1162/99608f92.5a8a3a3d</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Rudin</surname>
                     <given-names>C.</given-names>
                  </name>
                  <name>
                     <surname>Radin</surname>
                     <given-names>J.</given-names>
                  </name>
               </person-group>
                <article-title>Why are we using black box models in AI when we don't need to? A lesson from an explainable AI competition</article-title>
               <source>Harv. Data Sci. Rev.</source>
               <volume>1.2</volume>
               <fpage>9</fpage>
               <lpage>9</lpage>
               <year>2019</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1162/99608f92.5a8a3a3d">https://doi.org/10.1162/99608f92.5a8a3a3d</ext-link>
            </element-citation>
         </ref>
         <ref id="B47">
            <label>47</label>
             <mixed-citation>[]	Salkin, H.M., and De Kluyver, C.A., The knapsack problem: a survey, Nav. Res. Logist. Q., 22(1), pp. 127-144, 1975. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1002/nav.3800220110">https://doi.org/10.1002/nav.3800220110</ext-link>.</mixed-citation>
            <element-citation publication-type="journal">
               <person-group person-group-type="author">
                  <name>
                     <surname>Salkin</surname>
                     <given-names>H.M.</given-names>
                  </name>
                  <name>
                      <surname>De Kluyver</surname>
                      <given-names>C.A.</given-names>
                  </name>
               </person-group>
                <article-title>The knapsack problem: a survey</article-title>
               <source>Nav. Res. Logist. Q.</source>
               <volume>22</volume>
               <issue>1</issue>
               <fpage>127</fpage>
               <lpage>144</lpage>
               <year>1975</year>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1002/nav.3800220110">https://doi.org/10.1002/nav.3800220110</ext-link>
            </element-citation>
         </ref>
         <ref id="B48">
            <label>48</label>
            <mixed-citation>[]	Bisong, E., Google Colaboratory, in Building Machine Learning and Deep Learning Models on Google Cloud Platform: A Comprehensive Guide for Beginners, E. Bisong, Ed., Berkeley, CA: Apress, 2019, pp. 59-64. DOI: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/978-1-4842-4470-8_7">https://doi.org/10.1007/978-1-4842-4470-8_7</ext-link>.</mixed-citation>
            <element-citation publication-type="book">
               <person-group person-group-type="author">
                  <name>
                     <surname>Bisong</surname>
                     <given-names>E.</given-names>
                  </name>
               </person-group>
                <chapter-title>Google Colaboratory</chapter-title>
                <source>Building Machine Learning and Deep Learning Models on Google Cloud Platform: A Comprehensive Guide for Beginners</source>
               <person-group person-group-type="editor">
                  <name>
                     <surname>Bisong</surname>
                     <given-names>E.</given-names>
                  </name>
               </person-group>
                <publisher-loc>Berkeley, CA</publisher-loc>
                <publisher-name>Apress</publisher-name>
               <year>2019</year>
               <fpage>59</fpage>
               <lpage>64</lpage>
               <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/978-1-4842-4470-8_7">https://doi.org/10.1007/978-1-4842-4470-8_7</ext-link>
            </element-citation>
         </ref>
      </ref-list>
      <fn-group>
         <fn id="fn1" fn-type="other">
            <label>How to cite:</label>
            <p> Velásquez-Henao, J.D., Franco-Cardona, C.J. and Cadavid-Higuita, L., Prompt Engineering: a methodology for optimizing interactions with AI-Language Models in the field of engineering. DYNA, 90 (230), Especial Conmemoración 90 años, pp. 9-17, November, 2023.</p>
         </fn>
      </fn-group>
      <fn-group>
         <fn id="fn2" fn-type="other">
            <label>J.D. Velásquez-Henao</label>
            <p> earned his BSc. in Civil Engineering in 1994, an MSc. in Systems Engineering in 1997, and a PhD in Energy Systems in 2009, all from the Universidad Nacional de Colombia in Medellin, Colombia. From 1994 to 1999, he worked for electricity utilities and consulting companies in the power sector. In 2000, he joined the Universidad Nacional de Colombia in Medellin and was appointed Full Professor of Computer Science in 2012. Between 2004 and 2006, he served as an Associate Dean for Research, and from 2009 to 2018, he led the Computing and Decision Science Department at the Facultad de Minas, Universidad Nacional de Colombia, Medellin Campus. His research and publications span simulation, modeling, optimization, and forecasting in energy markets. He specializes in nonlinear time-series analysis and forecasting using statistical and computational intelligence techniques, numerical optimization with metaheuristics, and analytics and data science. He currently teaches postgraduate courses in data science, machine learning, and big data in the Analytics program, emphasizing Python programming. ORCID: 0000-0003-3043-3037</p>
         </fn>
         <fn id="fn3" fn-type="other">
            <label>C. J. Franco</label>
            <p> is a Full Professor at the Universidad Nacional de Colombia, Medellín Campus, Medellín, Colombia. He earned his MSc. in 1996 and his PhD in 2002, both from the Universidad Nacional de Colombia. Currently, his research interests include energy markets, system dynamics, and complexity. ORCID: 0000-0002-7750-857X</p>
         </fn>
         <fn id="fn4" fn-type="other">
            <label>L. Cadavid</label>
            <p> earned a BSc. in Management Engineering in 2006, an MSc in Systems Engineering in 2010, and a PhD in Systems Engineering in 2015, all from the Universidad Nacional de Colombia, Medellín, Colombia. She currently serves as an Assistant Professor at the Universidad Nacional de Colombia, Medellín Campus. Her primary research focuses and publications include the diffusion of innovations, agent-based modeling and simulation, and clean energy. ORCID: 0000-0002-6025-5940</p>
         </fn>
      </fn-group>
   </back>
</article>