<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article
  PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "http://jats.nlm.nih.gov/publishing/1.0/JATS-journalpublishing1.dtd">
<article article-type="research-article" dtd-version="1.0" specific-use="sps-1.6" xml:lang="en" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
	<front>
		<journal-meta>
			<journal-id journal-id-type="publisher-id">dyna</journal-id>
			<journal-title-group>
				<journal-title>DYNA</journal-title>
				<abbrev-journal-title abbrev-type="publisher">Dyna rev.fac.nac.minas</abbrev-journal-title>
			</journal-title-group>
			<issn pub-type="ppub">0012-7353</issn>
			<publisher>
				<publisher-name>Universidad Nacional de Colombia</publisher-name>
			</publisher>
		</journal-meta>
		<article-meta>
			<article-id pub-id-type="doi">10.15446/dyna.v85n205.69470</article-id>
			<article-categories>
				<subj-group subj-group-type="heading">
					<subject>Artículos</subject>
				</subj-group>
			</article-categories>
			<title-group>
				<article-title>Characterization of postures to analyze people’s emotions using Kinect technology</article-title>
				<trans-title-group xml:lang="es">
					<trans-title>Caracterización de posturas para el análisis de emociones de personas, por medio de la tecnología Kinect.</trans-title>
				</trans-title-group>
			</title-group>
			<contrib-group>
				<contrib contrib-type="author">
					<name>
						<surname>Monsalve-Pulido</surname>
						<given-names>Julián Alberto</given-names>
					</name>
					<xref ref-type="aff" rid="aff1"><sup>a</sup></xref>
				</contrib>
				<contrib contrib-type="author">
					<name>
						<surname>Parra-Rodríguez</surname>
						<given-names>Carlos Alberto</given-names>
					</name>
					<xref ref-type="aff" rid="aff2"><sup>b</sup></xref>
				</contrib>
			</contrib-group>
			<aff id="aff1">
				<label>a</label>
				<institution content-type="original"> Universidad Santo Tomás Tunja, Colombia. julian.monsalve@usantoto.edu.co</institution>
				<institution content-type="normalized">Universidad Santo Tomás</institution>
				<institution content-type="orgname">Universidad Santo Tomás</institution>
				<addr-line>
					<named-content content-type="city">Tunja</named-content>
				</addr-line>
				<country country="CO">Colombia</country>
				<email>julian.monsalve@usantoto.edu.co</email>
			</aff>
			<aff id="aff2">
				<label>b</label>
				<institution content-type="original"> Pontificia Universidad Javeriana, Bogotá, Colombia. carlos.parra@javeriana.edu.co</institution>
				<institution content-type="normalized">Pontificia Universidad Javeriana</institution>
				<institution content-type="orgname">Pontificia Universidad Javeriana</institution>
				<addr-line>
					<named-content content-type="city">Bogotá</named-content>
				</addr-line>
				<country country="CO">Colombia</country>
				<email>carlos.parra@javeriana.edu.co</email>
			</aff>
			<pub-date pub-type="epub-ppub">
				<season>Apr-Jun</season>
				<year>2018</year>
			</pub-date>
			<volume>85</volume>
			<issue>205</issue>
			<fpage>256</fpage>
			<lpage>263</lpage>
			<history>
				<date date-type="received">
					<day>15</day>
					<month>12</month>
					<year>2017</year>
				</date>
				<date date-type="rev-recd">
					<day>10</day>
					<month>05</month>
					<year>2018</year>
				</date>
				<date date-type="accepted">
					<day>29</day>
					<month>05</month>
					<year>2018</year>
				</date>
			</history>
			<permissions>
				<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by-nc-nd/4.0/" xml:lang="en">
					<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution License</license-p>
				</license>
			</permissions>
			<abstract>
				<title>Abstract </title>
				<p>This article synthesizes the research undertaken into the use of classification techniques to characterize people's postures, the objective being to identify emotions (astonishment, anger, happiness, and sadness). We used a three-phase exploratory research methodology, which resulted in technological appropriation and a model that classifies the emotions of people in a standing position using the Kinect Skeletal Tracking algorithm, based on free software. We proposed a feature vector for pattern recognition using classification techniques such as SVM, KNN, and Bayesian Networks on 17,882 pieces of data obtained from a 14-person training sample. As a result, we found that the KNN algorithm has a maximum effectiveness of 89.0466%, which surpasses the other selected algorithms.</p>
			</abstract>
			<trans-abstract xml:lang="es">
				<title>Resumen</title>
				<p>El presente artículo sintetiza la investigación realizada en el uso de técnicas de clasificación para un proceso de caracterización de posturas de personas que tiene como objetivo la identificación de emociones (Asombro, Enfado, Felicidad y Tristeza). En este proyecto de investigación fue necesario utilizar una metodología de investigación exploratoria en tres fases donde el resultado es una apropiación tecnológica y un modelo de clasificación de emociones en personas en posición de pie, usando el algoritmo de Skeletal Tracking de Kinect basado en software libre. Se propuso un vector de características para el reconocimiento de patrones usando técnicas de clasificación como SVM, KNN y Redes Bayesianas en 17.882 datos obtenidos en una muestra de entrenamiento de 14 personas. Como resultado se evidenció que el algoritmo KNN tiene una efectividad máxima del 89.0466% superando a los demás algoritmos seleccionados. </p>
			</trans-abstract>
			<kwd-group xml:lang="en">
				<title>Keywords:</title>
				<kwd>analysis of emotions</kwd>
				<kwd>recognition of postures</kwd>
				<kwd>free software</kwd>
				<kwd>Kinect</kwd>
				<kwd>KNN</kwd>
			</kwd-group>
			<kwd-group xml:lang="es">
				<title>Palabras clave:</title>
				<kwd>análisis de emociones</kwd>
				<kwd>reconocimiento de posturas</kwd>
				<kwd>software libre</kwd>
				<kwd>Kinect</kwd>
				<kwd>KNN</kwd>
			</kwd-group>
			<counts>
				<fig-count count="10"/>
				<table-count count="2"/>
				<equation-count count="8"/>
				<ref-count count="26"/>
				<page-count count="8"/>
			</counts>
		</article-meta>
	</front>
	<body>
		<sec sec-type="intro">
			<title>1. Introduction</title>
			<p>Human-machine interaction has been evolving over recent years, in particular the Natural User Interface (NUI), which aims to integrate user interaction with a computer system through natural perception. A NUI can be manipulated according to user needs through direct or intermediate devices that create a transparent and discreet perception [<xref ref-type="bibr" rid="B1">1</xref>,<xref ref-type="bibr" rid="B2">2</xref>]. This research appropriates Kinect technology as a Natural User Interface, with the objective of characterizing people&#8217;s postures to identify emotions.</p>
			<p>The application of this research focuses on a regional problem in Boyacá, Colombia. Problems have been identified in the tourism sector due to the absence of effective mechanisms to promote and market tourist destinations, a result of poor coordination and the lack of good strategies to boost the sector. To help develop tourism in the region, it is necessary to analyze experiential tourism, a new form of tourism based on the emotions and experiences tourists have when interacting with a destination; it can be defined as an extraordinary personal experience that combines tangible aspects, represented in tourism products, with intangible aspects such as freedom, security, tranquility, and relaxation [<xref ref-type="bibr" rid="B3">3</xref>,<xref ref-type="bibr" rid="B4">4</xref>]. As digital media is used in all aspects of a tourist&#8217;s experience, it is necessary to create automatic interpretation mechanisms that qualify emotions or feelings about a tourist product or service. These mechanisms belong to the Natural Language Processing area, which can be defined as the <italic>&quot;discipline focused on the design and implementation of computer applications that communicate with people through the use of natural language&quot;</italic> [<xref ref-type="bibr" rid="B5">5</xref>]. Also, when sentiment analysis is applied to opinions, the most complete definition is the following: it is a <italic>&quot;set of computational techniques for the extraction, classification, understanding and evaluation of opinions expressed in sources published on the Internet, comments on web portals and other content generated by users&quot;</italic> [<xref ref-type="bibr" rid="B6">6</xref>].</p>
			<p>The data to be analyzed come from various sources (social networks, travel planners, blogs, etc.) and from different types of data (text, images, sounds, videos, and numerical values) for which it is necessary to use multimodal methodologies to perform a classification and thus identify a good polarity. As a solution to the problem, we propose creating a multimodal model to generally analyze feelings or by using a fusion process that would integrate the results of text classification, postures, and quantitative qualifications of a tourist product or service. This article only documents the results of recognizing positions that, in the future will be integrated into the multimodal model by interpreting the resulting vector through a merger at the decision or identity level. In <xref ref-type="fig" rid="f1">Fig. 1</xref>, the multimodal model is described.</p>
			<p>
				<fig id="f1">
					<label>Figure 1</label>
					<caption>
						<title>Integration of posture recognition to the multimodal model</title>
					</caption>
					<graphic xlink:href="0012-7353-dyna-85-205-00256-gf1.jpg"/>
					<attrib><bold>Source:</bold> Authors</attrib>
				</fig>
			</p>
			<p>The diversity of information and the volume of data that must be analyzed are due to the mass storage of digital-media information generated by people and electronic devices. The January 2016 Digital report, for example, mentions that 46% of the global population has Internet access (3.419 billion people), and 31% (2.307 billion people) are active users of social networks. Facebook tops the list with 1,550 million users, Qzone has 653 million users, Tumblr has 555 million, Instagram has 400 million, Twitter has 320 million, Baidu has 300 million, Sina Weibo has 222 million, and YY has 122 million users. Facebook users generate more than 500 terabytes of content each day; there are more than 2,700 million &quot;likes&quot; and around 300 million photographs [<xref ref-type="bibr" rid="B7">7</xref>]. This data source is coveted by the large marketing industries, whose objective is to undertake large-scale analysis of structured and unstructured information using Big Data or data-mining techniques.</p>
			<p>Identifying emotions is a complex process: emotions result from physical and psychological reactions, develop into behavior such as thought, and create ambiguous natural-language variables that are difficult to interpret, such as surprise, anxiety, fear, and irony [<xref ref-type="bibr" rid="B8">8</xref>,<xref ref-type="bibr" rid="B9">9</xref>]. Emotions influence different ways of acting depending on our thoughts; unexpected events that disturb our normal behavior can lead to changes in conduct and decision making. For application development, human emotions are important in terms of usability, especially in intelligent environments, where emotions influence cognition and, therefore, intelligence. This is particularly true when social decisions are made [<xref ref-type="bibr" rid="B8">8</xref>,<xref ref-type="bibr" rid="B10">10</xref>]. Therefore, this research focuses on identifying emotions (happiness, amazement, anger, and sadness) from supervised posture classification.</p>
			<p>The article begins with an explanation of the methodology applied to the investigation, followed by a brief description of the state of the art; some pattern recognition techniques are then considered. Finally, we present the conclusions drawn from the results obtained.</p>
		</sec>
		<sec>
			<title>2. Related work</title>
			<p>Non-verbal communication is the communication process in which messages are sent and received without words: through signs and gestures [<xref ref-type="bibr" rid="B11">11</xref>]. It has no syntactic structure, so sequences of hierarchical constituents cannot be analyzed. The first impression a person makes occurs within seven seconds, and 93% of the information we communicate depends on our body language. A conversation consists of two parts: the verbal or conscious, and the non-verbal or unconscious and emotional. This research analyzes only unconscious non-verbal communication and focuses on body positions, where gestures communicate feelings, emotions, and intentions in a fraction of a second, using Kinect technology [<xref ref-type="bibr" rid="B12">12</xref>].</p>
			<p>When identifying emotions, several authors have investigated models that combine different areas such as psychology, biology, and neuroscience; their results include how emotions and intelligence are combined. One example is the Sentic Computing [<xref ref-type="bibr" rid="B8">8</xref>] research for which investigators have developed a 3D sandglass model that represents affective states through labels. Four independent but related affective dimensions are used that can potentially describe the full range of emotional experiences rooted in any of us.</p>
			<p>
				<fig id="f2">
					<label>Figure 2</label>
					<caption>
						<title>3D model and emotions hourglass</title>
					</caption>
					<graphic xlink:href="0012-7353-dyna-85-205-00256-gf2.jpg"/>
					<attrib><bold>Source:</bold> [<xref ref-type="bibr" rid="B8">8</xref>]</attrib>
				</fig>
			</p>
			<p>Sentiment analysis in digital environments has changed the way user opinions are analyzed, through quantitative or qualitative indicators that rate a product or service; from these, information generated by consumers is extracted in large volumes, freely and spontaneously, throughout the process of buying a product or service (before, during, or after). In most cases, opinion mining is applied to large volumes of information, specifically in text analysis, where approaches such as subjective lexicons, the N-gram model, and machine learning are used [<xref ref-type="bibr" rid="B13">13</xref>]. In sentiment analysis research, several methodologies have been proposed that serve as a basis for an effective process. An example is the one proposed by [<xref ref-type="bibr" rid="B14">14</xref>], which includes five steps for developing an effective analysis: lexicon generation, subjectivity detection, polarity detection, sentiment structure, and sentiment visualization.</p>
			<p>Information currently stored in diverse sources is multimodal, and the combination of text, image, video, or sound creates a broader problem for sentiment analysis, making it necessary to create and identify models or recognition techniques for multimodal classification. The ability to perform multimodal fusion is an important prerequisite for successfully implementing agent-user interaction. One of the main obstacles to multimodal fusion is the development and specification of a methodology that integrates cognitive and affective information from various sources at different time scales and with different measurement values [<xref ref-type="bibr" rid="B15">15</xref>]. There are fusion techniques that help with more effective interpretation. One example is fusion at the entity (feature) level, which combines the characteristics extracted from each input channel into a joint vector before any classification operation is performed [<xref ref-type="bibr" rid="B16">16</xref>].</p>
			<p>
				<fig id="f3">
					<label>Figure 3</label>
					<caption>
						<title>Fusion at the identity level</title>
					</caption>
					<graphic xlink:href="0012-7353-dyna-85-205-00256-gf3.png"/>
					<attrib><bold>Source:</bold> Adapted from [<xref ref-type="bibr" rid="B15">15</xref>].</attrib>
				</fig>
			</p>
			<p>This approach presents problems: it must integrate highly disparate input features and synchronize multiple inputs, which leads to unnecessary repetition and increased computational cost [<xref ref-type="bibr" rid="B15">15</xref>].</p>
			<p>Moreover, in fusion at the decision level, each modality is modeled and classified independently. The unimodal results are combined at the end of the process using suitable metrics such as expert rules and simple operators, including majority votes, sums, products, and statistical weighting. Decision-level fusion is often the preferred method of data fusion, since the errors of the different classifiers tend not to be correlated and the methodology is independent of the features [<xref ref-type="bibr" rid="B17">17</xref>].</p>
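			<p>As an illustrative sketch only (not the authors' implementation), decision-level fusion with a majority-vote operator can be expressed as follows; the integer class labels and the number of modalities are hypothetical:</p>

```java
public class DecisionFusion {
    // Decision-level fusion sketch: each modality is classified
    // independently and emits a class label in [0, numClasses).
    // The fused decision is the majority vote; ties are resolved
    // in favor of the lowest label.
    public static int fuse(int[] unimodalDecisions, int numClasses) {
        int[] votes = new int[numClasses];
        for (int d : unimodalDecisions) votes[d]++;
        int best = 0;
        for (int c = 1; c < numClasses; c++)
            if (votes[c] > votes[best]) best = c;
        return best;
    }
}
```

			<p>In practice the simple vote can be replaced by weighted sums or expert rules, as the paragraph above notes, without changing the overall structure.</p>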
			<p>
				<fig id="f4">
					<label>Figure 4</label>
					<caption>
						<title>Fusion at the decision level</title>
					</caption>
					<graphic xlink:href="0012-7353-dyna-85-205-00256-gf4.png"/>
					<attrib><bold>Source:</bold> Adapted from [<xref ref-type="bibr" rid="B17">17</xref>]</attrib>
				</fig>
			</p>
			<p>The investigation in [<xref ref-type="bibr" rid="B18">18</xref>] is an example of the above; the authors developed a new approach to recognizing bimodal emotions based on facial expression and speech, using the sparse kernel reduced-rank regression (SKRRR) fusion method. Furthermore, in the research carried out by [<xref ref-type="bibr" rid="B19">19</xref>], a methodology was developed to analyze multimodal sentiment, which aims to collect sentiment from web videos; it demonstrates a model that integrates audio, visual, and textual modalities by fusing features and extracting affective information from multiple modalities. The results obtained are almost 80% accurate, surpassing the state-of-the-art systems by more than 20%.</p>
			<p>An evolution of NUI was the creation of the Kinect sensor, whose initial functionality evolved to improve the user experience in video games through natural interaction such as movement and voice. It has been used in research in various areas, and important results have been obtained in image and sound recognition [<xref ref-type="bibr" rid="B20">20</xref>]. The Kinect sensor incorporates several detection components: a depth sensor, a color camera, and an array of four microphones that together provide full-body 3D motion capture, facial recognition, and speech recognition capabilities.</p>
			<p>Kinect technology has been used to identify emotions in multimodal analysis. For example, [<xref ref-type="bibr" rid="B22">22</xref>] recorded video and depth images of students in a classroom with Kinect technology; the data were processed with techniques that tracked posture and facial gestures. The results related the tutor&#8217;s perception to cues implicitly present in the students&#8217; physical demand and frustration. In addition, posture and gesture were correlated with the students&#8217; cognitive-affective states, which tutors perceived through the implicit affective channel. [<xref ref-type="bibr" rid="B23">23</xref>] used Kinect technology to identify consumer reactions at a food-testing kiosk with a multimodal system programmed to recognize affect and classify whether a consumer likes or dislikes a tested product. The consumer&#8217;s facial expression, body posture, hand gestures, and voice were analyzed after testing the product. The result was a classifier built through an algorithm that assigned emotion templates using support vector machines.</p>
			<p>
				<fig id="f5">
					<label>Figure 5</label>
					<caption>
						<title>Physical structure of the camera </title>
					</caption>
					<graphic xlink:href="0012-7353-dyna-85-205-00256-gf5.jpg"/>
					<attrib><bold>Source:</bold> [<xref ref-type="bibr" rid="B20">20</xref>, <xref ref-type="bibr" rid="B21">21</xref>]. </attrib>
				</fig>
			</p>
		</sec>
		<sec sec-type="materials|methods">
			<title>3. Materials and methods</title>
			<p>To undertake this research, we proposed a three-phase methodology whose main objective is to characterize emotions through postures. The first phase detailed the state of the art regarding previous research in the following areas: sentiment analysis using Kinect technology, sentiment analysis in psychology, technical analysis of the Kinect device, and social analysis of the identification of emotions and feelings. The second phase included the creation of a model to collect and interpret posture information generated by Kinect technology, and the characterization of four feelings using pattern recognition algorithms. Phase three concludes with the results obtained and a proposal for future work.</p>
			<p>One of the main functions of the Kinect sensor, &quot;skeletal tracking&quot;, was used during the research. It is based on a skeleton-tracking algorithm that identifies the body parts of people in the sensor&#8217;s field of vision. Using this algorithm, we can obtain points that refer to a person&#8217;s body parts and then identify gestures and/or postures. The sensor identifies twenty reference points (head, shoulder center, right shoulder, left shoulder, right elbow, left elbow, right wrist, left wrist, right hand, left hand, spine, hip center, left hip, right hip, right knee, left knee, right ankle, left ankle, right foot, and left foot).</p>
			<p>To track the Kinect skeleton, the depth images must be processed, human forms must be detected, and the user&#8217;s body parts in the image must be identified. Each body part is abstracted as a 3D coordinate called an articulation; a set of articulations forms a virtual skeleton for each of Kinect&#8217;s depth images, that is, 30 skeletons are obtained per second.</p>
			<p>The articulations generated vary according to the Kinect library used [<xref ref-type="bibr" rid="B20">20</xref>]. For this research, the free-distribution framework OpenNI (Open Natural Interaction) was used, with an open-source license and multiplatform development. It supports a middleware that implements complete-body analysis and skeletal tracking, as well as analysis of hand position and tracking, and gesture recognition. The framework incorporates the NITE module, which integrates a library that identifies each skeleton by its 15 articulations a<sub>i</sub> = {x<sub>i</sub>, y<sub>i</sub>, z<sub>i</sub>} with z<sub>i</sub> &gt; 0 (see <xref ref-type="fig" rid="f6">Fig. 6</xref>). These coordinates are expressed in millimeters with respect to the position of the Kinect in the scene. In the Microsoft SDK and the Xbox console, five joints are added (the ankles, the wrists, and the center of the hip). The framework was configured and programmed on a GNU/Linux platform in the Java programming language, using a three-layer architecture to integrate the storage and querying of information in a posture-characteristics model. Weka libraries were integrated to use the recognition algorithms, together with a Postgres database.</p>
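			<p>As a minimal, hypothetical sketch (the class and method names are illustrative, not the Simple-OpenNI API), the 15 articulations a<sub>i</sub> = {x<sub>i</sub>, y<sub>i</sub>, z<sub>i</sub>} can be flattened into the single feature vector later consumed by the classifiers:</p>

```java
public class SkeletonVector {
    // Each articulation is a 3D coordinate (x_i, y_i, z_i), in millimetres
    // relative to the Kinect; z_i > 0 for a person in front of the sensor.
    // The 15 joints are concatenated into one flat feature vector.
    public static double[] toFeatureVector(double[][] joints) {
        double[] v = new double[joints.length * 3];
        for (int i = 0; i < joints.length; i++) {
            v[3 * i]     = joints[i][0]; // x_i
            v[3 * i + 1] = joints[i][1]; // y_i
            v[3 * i + 2] = joints[i][2]; // z_i
        }
        return v;
    }
}
```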
			<p>
				<fig id="f6">
					<label>Figure 6</label>
					<caption>
						<title>Kinect benchmarks in OpenNI</title>
					</caption>
					<graphic xlink:href="0012-7353-dyna-85-205-00256-gf6.jpg"/>
					<attrib><bold>Source:</bold> Adapted from [<xref ref-type="bibr" rid="B20">20</xref>]</attrib>
				</fig>
			</p>
			<p>This research identified four feelings (happiness, sadness, anger, and amazement), which are described in <xref ref-type="fig" rid="f7">Fig. 7</xref>. Happiness is a feeling of fullness, joy, fulfillment, and enjoyment; sadness, a feeling of emptiness, restlessness, decay, and demotivation; anger, a feeling of annoyance and offense; and amazement, a feeling of discovering something unforeseen or unexpected.</p>
			<p>
				<fig id="f7">
					<label>Figure 7</label>
					<caption>
						<title>Emotions for classifier training.</title>
					</caption>
					<graphic xlink:href="0012-7353-dyna-85-205-00256-gf7.jpg"/>
					<attrib><bold>Source:</bold> Authors</attrib>
				</fig>
			</p>
			<p>To construct the classifier, we tested supervised pattern-recognition algorithms; one of the main ones evaluated was the Support Vector Machine (SVM). The main objective of this method is to find an optimal-margin hyperplane, using support vectors capable of forming a decision border around the learning-data domain. The hyperplane is defined by: </p>
			<p>
				<disp-formula id="e1">
					<graphic xlink:href="0012-7353-dyna-85-205-00256-e1.png"/>
				</disp-formula>
			</p>
			<p>
				<disp-formula id="e2">
					<graphic xlink:href="0012-7353-dyna-85-205-00256-e2.jpg"/>
				</disp-formula>
			</p>
			<p>where w is the normal weight vector of the separating hyperplane, b is the bias term, and x is the vector of n characteristics. The classification of a new individual x(i) is given by its position relative to the hyperplane. SVM is based on the use of kernel functions that allow an optimal separation of the data to be obtained. These are some examples of kernels used in SVM [<xref ref-type="bibr" rid="B24">24</xref>]:</p>
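			<p>A minimal sketch of the resulting linear decision rule, sign(w &#183; x + b), assuming the weight vector w and bias b have already been learned (illustrative code, not the Weka SVM implementation used in the study):</p>

```java
public class LinearSvm {
    // Linear SVM decision function: f(x) = sign(w . x + b).
    // Returns +1 or -1 depending on which side of the hyperplane x lies.
    public static int classify(double[] w, double b, double[] x) {
        double s = b;
        for (int i = 0; i < w.length; i++)
            s += w[i] * x[i]; // dot product w . x
        return s >= 0 ? 1 : -1;
    }
}
```

			<p>A non-linear kernel replaces the dot product with one of the kernel functions listed below, evaluated against the support vectors.</p>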
			<p>
				<disp-formula id="e3">
					<graphic xlink:href="0012-7353-dyna-85-205-00256-e3.jpg"/>
				</disp-formula>
			</p>
			<p>
				<disp-formula id="e4">
					<graphic xlink:href="0012-7353-dyna-85-205-00256-e4.jpg"/>
				</disp-formula>
			</p>
			<p>
				<disp-formula id="e5">
					<graphic xlink:href="0012-7353-dyna-85-205-00256-e5.jpg"/>
				</disp-formula>
			</p>
			<p>
				<disp-formula id="e6">
					<graphic xlink:href="0012-7353-dyna-85-205-00256-e6.jpg"/>
				</disp-formula>
			</p>
			<p>
				<disp-formula id="e7">
					<graphic xlink:href="0012-7353-dyna-85-205-00256-e7.jpg"/>
				</disp-formula>
			</p>
			<p>Another classification method explored was k-nearest neighbors (KNN). Given a vector x(i) and a set N of labeled neighbors, the task of the classifier is to predict the class label of x(i) from the class labels of the set N by majority vote. In KNN, the most important parameter is the number of neighbors k, and its choice is essential to building the model, as k can strongly influence generalization performance. The value of k must be large enough to minimize the probability of error, but also reasonably small compared to the size of the set N [<xref ref-type="bibr" rid="B25">25</xref>]. </p>
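			<p>The majority-vote rule can be sketched in Java as follows; this is a naive implementation for illustration only (the study used the Weka libraries), with hypothetical integer class labels:</p>

```java
import java.util.*;

public class Knn {
    // Predicts the label of x by majority vote among the k training
    // samples nearest to x (naive O(n log n) search for clarity).
    public static int classify(double[][] train, int[] labels, double[] x, int k) {
        Integer[] idx = new Integer[train.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        // Sort training indices by distance to x, nearest first.
        Arrays.sort(idx, Comparator.comparingDouble(i -> dist(train[i], x)));
        Map<Integer, Integer> votes = new HashMap<>();
        for (int i = 0; i < k; i++)
            votes.merge(labels[idx[i]], 1, Integer::sum);
        return Collections.max(votes.entrySet(), Map.Entry.comparingByValue()).getKey();
    }

    // Squared Euclidean distance; the ordering is the same as Euclidean.
    static double dist(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return s;
    }
}
```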
			<p>Furthermore, the Bayesian classifier is a supervised learning algorithm and a statistical method based on Bayes&#8217; theorem. Given a sample x(i) and a set of training samples S, each with its class label C<sub>l</sub>, with l &#8712; [1, L] and L being the number of classes, the classifier predicts that x(i) belongs to the class with the highest posterior probability:</p>
			<p>
				<disp-formula id="e8">
					<graphic xlink:href="0012-7353-dyna-85-205-00256-e8.png"/>
				</disp-formula>
			</p>
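			<p>Since the evidence term is constant across classes, the rule above reduces to choosing the class that maximizes P(x(i) | C<sub>l</sub>)P(C<sub>l</sub>). A minimal sketch, assuming the priors and class-conditional likelihoods have already been estimated from the training set S (illustrative code, not the Weka NaiveBayes implementation):</p>

```java
public class BayesClassifier {
    // Maximum-a-posteriori decision: returns the index l of the class
    // maximizing P(x | C_l) * P(C_l), which is proportional to the
    // posterior P(C_l | x) because the evidence P(x) is class-independent.
    public static int classify(double[] priors, double[] likelihoods) {
        int best = 0;
        for (int l = 1; l < priors.length; l++)
            if (likelihoods[l] * priors[l] > likelihoods[best] * priors[best])
                best = l;
        return best;
    }
}
```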
		</sec>
		<sec sec-type="results">
			<title>4. Results</title>
			<p>To develop the system, we used free software tools, the objective being to integrate a scalable solution so the research could continue without the limitations of proprietary licenses. The following tools were used in the model: Processing, the core of flexible solutions adapted to learning about visual arts in digital environments; and OpenNI (Open Natural Interaction), as previously mentioned, a tool that focuses on the certification and improvement of the interoperability of natural user interfaces and organic user interfaces for natural-interaction devices such as the Kinect [<xref ref-type="bibr" rid="B20">20</xref>]. To recognize the skeleton from the 15 reference points, the Simple-OpenNI library was used, and libraries extracted from WEKA<xref ref-type="fn" rid="fn5"><sup>5</sup></xref> were used to apply the pattern recognition algorithms. The model&#8217;s integral solution was developed with the Eclipse development environment using the Java language and libraries. </p>
			<p>When constructing the model, we identified the existing protocols for collecting information on people&#8217;s movements using a video device; the Davis marker-placement protocol [<xref ref-type="bibr" rid="B26">26</xref>] was taken as the basis for this research, since it is one of the most commonly used in biomechanics. It consists of using the anatomical points of bony eminences, depending on the movement to be analyzed. The fifteen skeleton points captured by the camera are stored in a &quot;Capture&quot; table with the structure (node, PosX, PosY, PosZ); they are then queried during the training and classification processes.</p>
			<p>The classifier starts with a capture process using the Kinect camera for an initial calibration; it then starts the training process so that the classifier interprets the data input using the Kinect camera (see <xref ref-type="fig" rid="f8">Fig. 8</xref>).</p>
			<p>
				<fig id="f8">
					<label>Figure 8</label>
					<caption>
						<title>Use case diagram</title>
					</caption>
					<graphic xlink:href="0012-7353-dyna-85-205-00256-gf8.png"/>
					<attrib><bold>Source:</bold> Authors</attrib>
				</fig>
			</p>
			<p>The general model begins by capturing information through the 15 Skeletal Tracking points generated by Kinect; the information is then stored in a Postgres database where the classifier can query it. For the training process, we created an .arff file with a specific structure to be verified by the Weka library (see <xref ref-type="fig" rid="f9">Fig. 9</xref>). </p>
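			<p>A minimal sketch of generating such an ARFF header programmatically; the relation and attribute names here are hypothetical (the structure actually used is the one shown in Fig. 9), but the @relation/@attribute/@data layout is the standard one Weka expects:</p>

```java
public class ArffWriter {
    // Builds a standard ARFF header: one numeric attribute per joint
    // coordinate (x_i, y_i, z_i) plus a nominal class attribute listing
    // the emotions, followed by the @data marker.
    public static String header(int joints, String[] emotions) {
        StringBuilder sb = new StringBuilder("@relation postures\n");
        for (int i = 0; i < joints; i++) {
            sb.append("@attribute x").append(i).append(" numeric\n");
            sb.append("@attribute y").append(i).append(" numeric\n");
            sb.append("@attribute z").append(i).append(" numeric\n");
        }
        sb.append("@attribute emotion {")
          .append(String.join(",", emotions))
          .append("}\n@data\n");
        return sb.toString();
    }
}
```

			<p>Each subsequent line of the file then holds one feature vector followed by its emotion label, which is what the training process stores for Weka to read.</p>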
			<p>
				<fig id="f9">
					<label>Figure 9</label>
					<caption>
						<title>ARFF file Structure </title>
					</caption>
					<graphic xlink:href="0012-7353-dyna-85-205-00256-gf9.jpg"/>
					<attrib><bold>Source:</bold> Authors</attrib>
				</fig>
			</p>
			<p>Weka captures information and generates a classification model according to the selected algorithm (SVM, KNN, NB) and creates the final classification according to the Skeletal Tracking input information based on the proposed model (see <xref ref-type="fig" rid="f10">Fig. 10</xref>).</p>
			<p>
				<fig id="f10">
					<label>Figure 10</label>
					<caption>
						<title>Posture characterization proposed model.</title>
					</caption>
					<graphic xlink:href="0012-7353-dyna-85-205-00256-gf10.jpg"/>
					<attrib><bold>Source:</bold> Authors</attrib>
				</fig>
			</p>
			<p>For the model&#8217;s tests, 17,882 training data points were used from fourteen randomly selected people who simulated the basic emotions (astonishment, anger, happiness, and sadness). The coordinates x<sub>i</sub>, y<sub>i</sub>, and z<sub>i</sub> (0 &lt; i &lt; 14) depend on the size and position of the person in the scene, and the following feature vector was identified:</p>
			<p>[x0,y0,z0,x1,y1,z1,x2,y2,z2,x3,y3,z3,x4,y4,z4,x5,y5,z5,x6,y6,z6,x7,y7,z7]</p>
			<p>For the classifier analysis tests, five different splits of training data were used (see <xref ref-type="table" rid="t2">Table 2</xref>). The first included ten random samples and gave notable results, with KNN reaching 86.1928% effectiveness. The second used 20% of the data for training and showed the KNN algorithm to be 89.0466% effective, significantly surpassing the other classifiers. In the other training tests (using 40%, 60%, and 80% of the total data for training), the KNN algorithm remained above the rest of the classification algorithms. </p>
			<p>
				<table-wrap id="t1">
					<label>Table 1</label>
					<caption>
						<title>Distribution of data on emotions for training purposes.</title>
					</caption>
					<graphic xlink:href="0012-7353-dyna-85-205-00256-gt1.jpg"/>
					<table-wrap-foot>
						<fn id="TFN1">
							<p><bold>Source:</bold> Authors</p>
						</fn>
					</table-wrap-foot>
				</table-wrap>
			</p>
			<p>
				<table-wrap id="t2">
					<label>Table 2</label>
					<caption>
						<title>Performance of the classifiers.</title>
					</caption>
					<graphic xlink:href="0012-7353-dyna-85-205-00256-gt2.jpg"/>
					<table-wrap-foot>
						<fn id="TFN2">
							<p><bold>Source:</bold> Authors</p>
						</fn>
					</table-wrap-foot>
				</table-wrap>
			</p>
		</sec>
		<sec sec-type="conclusions">
			<title>5. Conclusions</title>
			<p>This study showed that the KNN algorithm outperforms SVM and NB, with a maximum accuracy of 89.0466% on the selected data set. It can therefore serve as a foundation for developing applications that recognize basic emotions (astonishment, anger, happiness, and sadness) using Kinect technology. The SVM- and NB-based algorithms scored lower than KNN, but future studies could revisit their effectiveness, since it may improve with unsupervised learning.</p>
			<p>The applied feature vector may vary when classifying more complex negative emotions, for example intensity gradations such as ((+) rage, anger, annoyance; (-) apprehension, fear, terror). This would require identifying additional skeleton points, which would increase the complexity of emotion recognition.</p>
			<p>The results of this investigation will be integrated into a general multimodal sentiment analysis model focused on the tourist area of the department of Boyacá, Colombia. They will be merged through a conjunction vector into a final sentiment classification that analyzes data types such as text and images.</p>
			<p>Using freely distributed tools in research creates a collaborative communication channel: it leverages solutions from communities around the world, making it possible to contribute to science without reinventing the wheel or requiring a large budget. For this research, we used tools including OpenNI, NITE, Weka, and Java to classify emotions, and the result was a technically functional and economically viable product. </p>
			<p>Human-computer interaction (HCI) has improved in recent years; Natural User Interfaces (NUI) have created usability solutions in which valuable information is stored and can be analyzed to identify users’ emotions. Kinect is not only for video games; it can also be applied in various areas of knowledge thanks to its innovative hardware and its increasing usefulness in research.</p>
		</sec>
	</body>
	<back>
		<ref-list>
			<title>References</title>
			<ref id="B1">
				<label>[1]</label>
				<mixed-citation>[1]  Mann, S., Intelligent image processing. IEEE, John Wiley &amp; Sons, Inc., 2002. DOI: 10.1002/0471221635</mixed-citation>
				<element-citation publication-type="book">
					<person-group person-group-type="author">
						<name>
							<surname>Mann</surname>
							<given-names>S.</given-names>
						</name>
					</person-group>
					<source>Intelligent image processing</source>
					<publisher-name>John Wiley &amp; Sons, Inc.</publisher-name>
					<year>2002</year>
					<pub-id pub-id-type="doi">10.1002/0471221635</pub-id>
				</element-citation>
			</ref>
			<ref id="B2">
				<label>[2]</label>
				<mixed-citation>[2]  Valli, A., Natural interaction white paper, 2007.</mixed-citation>
				<element-citation publication-type="book">
					<person-group person-group-type="author">
						<name>
							<surname>Valli</surname>
							<given-names>A.</given-names>
						</name>
					</person-group>
					<source>Natural interaction white paper</source>
					<year>2007</year>
				</element-citation>
			</ref>
			<ref id="B3">
				<label>[3]</label>
				<mixed-citation>[3]  Rivera-Mateos, M., El turismo experiencial como forma de turismo responsable e intercultural, en: García-Rodríguez, L., Roldán-Tapía, A.R., Eds., Relac. Intercult. en la Divers., 2013, pp. 199-217.</mixed-citation>
				<element-citation publication-type="book">
					<person-group person-group-type="author">
						<name>
							<surname>Rivera-Mateos</surname>
							<given-names>M.</given-names>
						</name>
					</person-group>
					<chapter-title>El turismo experiencial como forma de turismo responsable e intercultural</chapter-title>
					<person-group person-group-type="author">
						<name>
							<surname>García-Rodríguez</surname>
							<given-names>L.</given-names>
						</name>
					</person-group>
					<person-group person-group-type="editor">
						<name>
							<surname>Roldán-Tapía</surname>
							<given-names>A.R.</given-names>
						</name>
					</person-group>
					<source>Relac. Intercult. en la Divers</source>
					<year>2013</year>
					<fpage>199</fpage>
					<lpage>217</lpage>
				</element-citation>
			</ref>
			<ref id="B4">
				<label>[4]</label>
				<mixed-citation>[4]  Smith, W.L., Experiential tourism around the world and at home: definitions and standards, Int. J. Serv. Stand., 2(1), 1 P, 2006. DOI: 10.1504/IJSS.2006.008156</mixed-citation>
				<element-citation publication-type="book">
					<person-group person-group-type="author">
						<name>
							<surname>Smith</surname>
							<given-names>W.L.</given-names>
						</name>
					</person-group>
					<source>Experiential tourism around the world and at home: definitions and standards</source>
					<source>Int. J. Serv. Stand.</source>
					<volume>2</volume>
					<issue>1</issue>
					<fpage>1</fpage>
					<lpage>1</lpage>
					<year>2006</year>
					<pub-id pub-id-type="doi">10.1504/IJSS.2006.008156</pub-id>
				</element-citation>
			</ref>
			<ref id="B5">
				<label>[5]</label>
				<mixed-citation>[5]  Dale, R., Moisl, H. and Somers, H.L., Handbook of natural language processing, Marcel Dekker, 2000.</mixed-citation>
				<element-citation publication-type="book">
					<person-group person-group-type="author">
						<name>
							<surname>Dale</surname>
							<given-names>R.</given-names>
						</name>
						<name>
							<surname>Moisl</surname>
							<given-names>H.</given-names>
						</name>
						<name>
							<surname>Somers</surname>
							<given-names>H.L.</given-names>
						</name>
					</person-group>
					<source>Handbook of natural language processing</source>
					<publisher-name>Marcel Dekker</publisher-name>
					<year>2000</year>
				</element-citation>
			</ref>
			<ref id="B6">
				<label>[6]</label>
				<mixed-citation>[6]  Cambria, E. and Hussain, A., Sentic album: content-, concept-, and context-based online personal photo management system, Cognit. Comput., 4(4), pp. 477-496, 2012. DOI: 10.1007/s12559-012-9145-4</mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Cambria</surname>
							<given-names>E.</given-names>
						</name>
						<name>
							<surname>Hussain</surname>
							<given-names>A.</given-names>
						</name>
					</person-group>
					<article-title>Sentic album: content-, concept-, and context-based online personal photo management system</article-title>
					<source>Cognit. Comput.</source>
					<volume>4</volume>
					<issue>4</issue>
					<fpage>477</fpage>
					<lpage>496</lpage>
					<year>2012</year>
					<pub-id pub-id-type="doi">10.1007/s12559-012-9145-4</pub-id>
				</element-citation>
			</ref>
			<ref id="B7">
				<label>[7]</label>
				<mixed-citation>[7]  Simon-Kemp, W.A.S., Digital in 2016, 2016.</mixed-citation>
				<element-citation publication-type="book">
					<person-group person-group-type="author">
						<name>
							<surname>Simon-Kemp</surname>
							<given-names>W.A.S.</given-names>
						</name>
					</person-group>
					<source>Digital in 2016</source>
					<year>2016</year>
				</element-citation>
			</ref>
			<ref id="B8">
				<label>[8]</label>
				<mixed-citation>[8]  Cambria, E., Livingstone, A. and Hussain, A. The Hourglass of Emotions. In: Esposito, A., Esposito, A.M., Vinciarelli, A., Hoffmann, R. and Müller, V.C., (eds.), Cognitive Behavioural Systems. Lecture Notes in Computer Science, vol 7403. Springer, Berlin, Heidelberg. 2012. DOI: 10.1007/978-3-642-34584-5_11</mixed-citation>
				<element-citation publication-type="book">
					<person-group person-group-type="author">
						<name>
							<surname>Cambria</surname>
							<given-names>E.</given-names>
						</name>
						<name>
							<surname>Livingstone</surname>
							<given-names>A.</given-names>
						</name>
						<name>
							<surname>Hussain</surname>
							<given-names>A.</given-names>
						</name>
					</person-group>
					<chapter-title>The Hourglass of Emotions</chapter-title>
					<person-group person-group-type="editor">
						<name>
							<surname>Esposito</surname>
							<given-names>A.</given-names>
						</name>
						<name>
							<surname>Esposito</surname>
							<given-names>A.M.</given-names>
						</name>
						<name>
							<surname>Vinciarelli</surname>
							<given-names>A.</given-names>
						</name>
						<name>
							<surname>Hoffmann</surname>
							<given-names>R.</given-names>
						</name>
						<name>
							<surname>Müller</surname>
							<given-names>V.C.</given-names>
						</name>
					</person-group>
					<source>Cognitive Behavioural Systems</source>
					<volume>7403</volume>
					<publisher-name>Springer</publisher-name>
					<publisher-loc>Berlin</publisher-loc>
					<year>2012</year>
					<pub-id pub-id-type="doi">10.1007/978-3-642-34584-5_11</pub-id>
				</element-citation>
			</ref>
			<ref id="B9">
				<label>[9]</label>
				<mixed-citation>[9]  Minsky, M., The emotion machine: commonsense thinking, artificial intelligence, and the future of the human mind, 2007.</mixed-citation>
				<element-citation publication-type="book">
					<person-group person-group-type="author">
						<name>
							<surname>Minsky</surname>
							<given-names>M.</given-names>
						</name>
					</person-group>
					<source>The emotion machine: commonsense thinking, artificial intelligence, and the future of the human mind</source>
					<year>2007</year>
				</element-citation>
			</ref>
			<ref id="B10">
				<label>[10]</label>
				<mixed-citation>[10]  Vesterinen, E., Affective computing. Pattern Analysis and Applications, 1(1), pp. 71-73, 1998.</mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Vesterinen</surname>
							<given-names>E.</given-names>
						</name>
					</person-group>
					<article-title>Affective computing</article-title>
					<source>Pattern Analysis and Applications</source>
					<volume>1</volume>
					<issue>1</issue>
					<fpage>71</fpage>
					<lpage>73</lpage>
					<year>1998</year>
				</element-citation>
			</ref>
			<ref id="B11">
				<label>[11]</label>
				<mixed-citation>[11]  Siegman, A.W. and Feldstein, S., Nonverbal behavior and communication. L. Erlbaum, 1987.</mixed-citation>
				<element-citation publication-type="book">
					<person-group person-group-type="author">
						<name>
							<surname>Siegman</surname>
							<given-names>A.W.</given-names>
						</name>
						<name>
							<surname>Feldstein</surname>
							<given-names>S.</given-names>
						</name>
					</person-group>
					<source>Nonverbal behavior and communication</source>
					<publisher-name>L. Erlbaum</publisher-name>
					<year>1987</year>
				</element-citation>
			</ref>
			<ref id="B12">
				<label>[12]</label>
				<mixed-citation>[12]  Pons, C., Comunicación no verbal. Barcelona: Editorial Kairós, 2015.</mixed-citation>
				<element-citation publication-type="book">
					<person-group person-group-type="author">
						<name>
							<surname>Pons</surname>
							<given-names>C.</given-names>
						</name>
					</person-group>
					<source>Comunicación no verbal</source>
					<publisher-loc>Barcelona</publisher-loc>
					<publisher-name>Editorial Kairós</publisher-name>
					<year>2015</year>
				</element-citation>
			</ref>
			<ref id="B13">
				<label>[13]</label>
				<mixed-citation>[13]  Kaur, A. and Gupta, V., A survey on sentiment analysis and opinion mining techniques, J. Emerg. Technol. Web Intell., 5(4), pp. 367-371, 2013. DOI: 10.4304/jetwi.5.4.367-3</mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Kaur</surname>
							<given-names>A.</given-names>
						</name>
						<name>
							<surname>Gupta</surname>
							<given-names>V.</given-names>
						</name>
					</person-group>
					<article-title>A survey on sentiment analysis and opinion mining techniques</article-title>
					<source>J. Emerg. Technol. Web Intell.</source>
					<volume>5</volume>
					<issue>4</issue>
					<fpage>367</fpage>
					<lpage>371</lpage>
					<year>2013</year>
					<pub-id pub-id-type="doi">10.4304/jetwi.5.4.367-3</pub-id>
				</element-citation>
			</ref>
			<ref id="B14">
				<label>[14]</label>
				<mixed-citation>[14]  Gamon, M., Aue, A., Corston-Oliver, S. and Ringger, E., Pulse: mining customer opinions from free text, Springer, Berlin , Heidelberg, 2005, pp. 121-132. DOI: 10.1007/11552253_12 </mixed-citation>
				<element-citation publication-type="book">
					<person-group person-group-type="author">
						<name>
							<surname>Gamon</surname>
							<given-names>M.</given-names>
						</name>
						<name>
							<surname>Aue</surname>
							<given-names>A.</given-names>
						</name>
						<name>
							<surname>Corston-Oliver</surname>
							<given-names>S.</given-names>
						</name>
						<name>
							<surname>Ringger</surname>
							<given-names>E.</given-names>
						</name>
					</person-group>
					<source>Pulse: mining customer opinions from free text</source>
					<publisher-name>Springer</publisher-name>
					<publisher-loc>Berlin</publisher-loc>
					<year>2005</year>
					<fpage>121</fpage>
					<lpage>132</lpage>
					<pub-id pub-id-type="doi">10.1007/11552253_12</pub-id>
				</element-citation>
			</ref>
			<ref id="B15">
				<label>[15]</label>
				<mixed-citation>[15]  Poria, S., Cambria, E., Hussain, A. and Bin Huang, G., Towards an intelligent framework for multimodal affective data analysis, Neural Networks, 63, pp. 104-116, 2015. DOI: 10.1016/j.neunet.2014.10.005 </mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Poria</surname>
							<given-names>S.</given-names>
						</name>
						<name>
							<surname>Cambria</surname>
							<given-names>E.</given-names>
						</name>
						<name>
							<surname>Hussain</surname>
							<given-names>A.</given-names>
						</name>
						<name>
							<surname>Bin Huang</surname>
							<given-names>G.</given-names>
						</name>
					</person-group>
					<article-title>Towards an intelligent framework for multimodal affective data analysis</article-title>
					<source>Neural Networks</source>
					<issue>63</issue>
					<fpage>104</fpage>
					<lpage>116</lpage>
					<year>2015</year>
					<pub-id pub-id-type="doi">10.1016/j.neunet.2014.10.005</pub-id>
				</element-citation>
			</ref>
			<ref id="B16">
				<label>[16]</label>
				<mixed-citation>[16]  Kapoor, A., Burleson, W. and Picard, R.W., Automatic prediction of frustration, Int. J. Hum. Comput. Stud., 65(8), pp. 724-736, 2007. DOI: 10.1016/J.IJHCS.2007.02.003 </mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Kapoor</surname>
							<given-names>A.</given-names>
						</name>
						<name>
							<surname>Burleson</surname>
							<given-names>W.</given-names>
						</name>
						<name>
							<surname>Picard</surname>
							<given-names>R.W.</given-names>
						</name>
					</person-group>
					<article-title>Automatic prediction of frustration</article-title>
					<source>Int. J. Hum. Comput. Stud.</source>
					<volume>65</volume>
					<issue>8</issue>
					<fpage>724</fpage>
					<lpage>736</lpage>
					<year>2007</year>
					<pub-id pub-id-type="doi">10.1016/J.IJHCS.2007.02.003</pub-id>
				</element-citation>
			</ref>
			<ref id="B17">
				<label>[17]</label>
				<mixed-citation>[17]  Lisetti, C.L., Pattern Analysis &amp; Applic, 1, J. Wiley, 1998, 71 P. DOI: 10.1007/BF01238028</mixed-citation>
				<element-citation publication-type="book">
					<person-group person-group-type="author">
						<name>
							<surname>Lisetti</surname>
							<given-names>C.L.</given-names>
						</name>
					</person-group>
					<source>Pattern Analysis &amp; Applic</source>
					<volume>1</volume>
					<publisher-name>J. Wiley</publisher-name>
					<year>1998</year>
					<size units="pages">71</size>
					<pub-id pub-id-type="doi">10.1007/BF01238028</pub-id>
				</element-citation>
			</ref>
			<ref id="B18">
				<label>[18]</label>
				<mixed-citation>[18]  Yan, J., Zheng, W., Xu, Q., Lu, G., Li, H. and Wang, B., Sparse Kernel reduced-rank regression for bimodal emotion recognition from facial expression and speech, IEEE Trans. Multimed., 18(7), pp. 1319-1329, 2016. DOI: 10.1109/TMM.2016.2557721 </mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Yan</surname>
							<given-names>J.</given-names>
						</name>
						<name>
							<surname>Zheng</surname>
							<given-names>W.</given-names>
						</name>
						<name>
							<surname>Xu</surname>
							<given-names>Q.</given-names>
						</name>
						<name>
							<surname>Lu</surname>
							<given-names>G.</given-names>
						</name>
						<name>
							<surname>Li</surname>
							<given-names>H.</given-names>
						</name>
						<name>
							<surname>Wang</surname>
							<given-names>B.</given-names>
						</name>
					</person-group>
					<article-title>Sparse Kernel reduced-rank regression for bimodal emotion recognition from facial expression and speech</article-title>
					<source>IEEE Trans. Multimed.</source>
					<volume>18</volume>
					<issue>7</issue>
					<fpage>1319</fpage>
					<lpage>1329</lpage>
					<year>2016</year>
					<pub-id pub-id-type="doi">10.1109/TMM.2016.2557721</pub-id>
				</element-citation>
			</ref>
			<ref id="B19">
				<label>[19]</label>
				<mixed-citation>[19]  Poria, S., Cambria, E., Howard, N., Bin Huang, G. and Hussain, A., Fusing audio, visual and textual clues for sentiment analysis from multimodal content, Neurocomputing, 174, pp. 50-59, 2016. DOI: 10.1016/j.neucom.2015.01.095 </mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Poria</surname>
							<given-names>S.</given-names>
						</name>
						<name>
							<surname>Cambria</surname>
							<given-names>E.</given-names>
						</name>
						<name>
							<surname>Howard</surname>
							<given-names>N.</given-names>
						</name>
						<name>
							<surname>Bin Huang</surname>
							<given-names>G.</given-names>
						</name>
						<name>
							<surname>Hussain</surname>
							<given-names>A.</given-names>
						</name>
					</person-group>
					<article-title>Fusing audio, visual and textual clues for sentiment analysis from multimodal content</article-title>
					<source>Neurocomputing</source>
					<issue>174</issue>
					<fpage>50</fpage>
					<lpage>59</lpage>
					<year>2016</year>
					<pub-id pub-id-type="doi">10.1016/j.neucom.2015.01.095</pub-id>
				</element-citation>
			</ref>
			<ref id="B20">
				<label>[20]</label>
				<mixed-citation>[20]  Benhumea, H.S., Interfaz de lenguaje natural usando Kinect. Unidad Zacatenco, 2012.</mixed-citation>
				<element-citation publication-type="book">
					<person-group person-group-type="author">
						<name>
							<surname>Benhumea</surname>
							<given-names>H.S.</given-names>
						</name>
					</person-group>
					<source>Interfaz de lenguaje natural usando Kinect</source>
					<publisher-name>Unidad Zacatenco</publisher-name>
					<year>2012</year>
				</element-citation>
			</ref>
			<ref id="B21">
				<label>[21]</label>
				<mixed-citation>[21]  Zeev-Zalevsky, J.G., Shpunt, A. and Maizels, A., Method and system for object reconstruction [online]. [date of reference: Sept. 04th of 2016]. <comment>Available at: <ext-link ext-link-type="uri" xlink:href="https://patents.google.com/patent/US20170004623">https://patents.google.com/patent/US20170004623</ext-link>
					</comment>
				</mixed-citation>
				<element-citation publication-type="webpage">
					<person-group person-group-type="author">
						<name>
							<surname>Zeev-Zalevsky</surname>
							<given-names>J.G.</given-names>
						</name>
						<name>
							<surname>Shpunt</surname>
							<given-names>A.</given-names>
						</name>
						<name>
							<surname>Maizels</surname>
							<given-names>A.</given-names>
						</name>
					</person-group>
					<source>Method and system for object reconstruction</source>
					<date-in-citation content-type="access-date" iso-8601-date="2016-09-04">Sept. 04th of 2016</date-in-citation>
					<comment>Available at: <ext-link ext-link-type="uri" xlink:href="https://patents.google.com/patent/US20170004623">https://patents.google.com/patent/US20170004623</ext-link>
					</comment>
				</element-citation>
			</ref>
			<ref id="B22">
				<label>[22]</label>
				<mixed-citation>[22]  Grafsgaard, J.F., Fulton, R.M., Boyer, K.E., Wiebe, E.N. and Lester, J.C., Multimodal analysis of the implicit affective channel in computer-mediated textual communication, Proc. 14th ACM Int. Conf. Multimodal Interact., pp. 145-152, 2012. DOI: 10.1145/2388676.2388708 </mixed-citation>
				<element-citation publication-type="confproc">
					<person-group person-group-type="author">
						<name>
							<surname>Grafsgaard</surname>
							<given-names>J.F.</given-names>
						</name>
						<name>
							<surname>Fulton</surname>
							<given-names>R.M.</given-names>
						</name>
						<name>
							<surname>Boyer</surname>
							<given-names>K.E.</given-names>
						</name>
						<name>
							<surname>Wiebe</surname>
							<given-names>E.N.</given-names>
						</name>
						<name>
							<surname>Lester</surname>
							<given-names>J.C.</given-names>
						</name>
					</person-group>
					<source>Multimodal analysis of the implicit affective channel in computer-mediated textual communication</source>
					<conf-name>14th ACM Int. Conf. Multimodal Interact.</conf-name>
					<fpage>145</fpage>
					<lpage>152</lpage>
					<year>2012</year>
					<pub-id pub-id-type="doi">10.1145/2388676.2388708</pub-id>
				</element-citation>
			</ref>
			<ref id="B23">
				<label>[23]</label>
				<mixed-citation>[23]  Patwardhan, A.S. and Knapp, G.M., Multimodal affect analysis for product feedback assessment, 2013, pp. 178-187.</mixed-citation>
				<element-citation publication-type="book">
					<person-group person-group-type="author">
						<name>
							<surname>Patwardhan</surname>
							<given-names>A.S.</given-names>
						</name>
						<name>
							<surname>Knapp</surname>
							<given-names>G.M.</given-names>
						</name>
					</person-group>
					<source>Multimodal affect analysis for product feedback assessment</source>
					<year>2013</year>
					<fpage>178</fpage>
					<lpage>187</lpage>
				</element-citation>
			</ref>
			<ref id="B24">
				<label>[24]</label>
				<mixed-citation>[24]  Choubik, Y. and Mahmoudi, A., Machine learning for real time poses classification using kinect skeleton data, in: 13th International Conference on Computer Graphics, Imaging and Visualization (CGiV), 2016, pp. 307-311. DOI: 10.1109/CGiV.2016.66 </mixed-citation>
				<element-citation publication-type="confproc">
					<person-group person-group-type="author">
						<name>
							<surname>Choubik</surname>
							<given-names>Y.</given-names>
						</name>
						<name>
							<surname>Mahmoudi</surname>
							<given-names>A.</given-names>
						</name>
					</person-group>
					<source>Machine learning for real time poses classification using kinect skeleton data</source>
					<conf-name>13th International Conference on Computer Graphics, Imaging and Visualization (CGiV)</conf-name>
					<conf-date>2016</conf-date>
					<fpage>307</fpage>
					<lpage>311</lpage>
					<pub-id pub-id-type="doi">10.1109/CGiV.2016.66</pub-id>
				</element-citation>
			</ref>
			<ref id="B25">
				<label>[25]</label>
				<mixed-citation>[25]  Shum, H.P.H., Ho, E.S.L., Jiang, Y. and Takagi, S., Real-time posture reconstruction for Microsoft Kinect, IEEE Trans. Cybern., 43(5), pp. 1357-1369, 2013. DOI: 10.1109/TCYB.2013.2275945</mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Shum</surname>
							<given-names>H.P.H.</given-names>
						</name>
						<name>
							<surname>Ho</surname>
							<given-names>E.S.L.</given-names>
						</name>
						<name>
							<surname>Jiang</surname>
							<given-names>Y.</given-names>
						</name>
						<name>
							<surname>Takagi</surname>
							<given-names>S.</given-names>
						</name>
					</person-group>
					<article-title>Real-time posture reconstruction for Microsoft Kinect</article-title>
					<source>IEEE Trans. Cybern.</source>
					<volume>43</volume>
					<issue>5</issue>
					<fpage>1357</fpage>
					<lpage>1369</lpage>
					<year>2013</year>
					<pub-id pub-id-type="doi">10.1109/TCYB.2013.2275945</pub-id>
				</element-citation>
			</ref>
			<ref id="B26">
				<label>[26]</label>
				<mixed-citation>[26]  Davis, R.B., Ounpuu, S., Tyburski, D. and Gage, J.R., A gait analysis data collection and reduction technique, Hum. Mov. Sci., 10(5), pp. 575-587, 1991. DOI: 10.1016/0167-9457(91)90046-Z</mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Davis</surname>
							<given-names>R.B.</given-names>
						</name>
						<name>
							<surname>Ounpuu</surname>
							<given-names>S.</given-names>
						</name>
						<name>
							<surname>Tyburski</surname>
							<given-names>D.</given-names>
						</name>
						<name>
							<surname>Gage</surname>
							<given-names>J.R.</given-names>
						</name>
					</person-group>
					<article-title>A gait analysis data collection and reduction technique</article-title>
					<source>Hum. Mov. Sci.</source>
					<volume>10</volume>
					<issue>5</issue>
					<fpage>575</fpage>
					<lpage>587</lpage>
					<year>1991</year>
					<pub-id pub-id-type="doi">10.1016/0167-9457(91)90046-Z</pub-id>
				</element-citation>
			</ref>
		</ref-list>
		<fn-group>
			<fn fn-type="other" id="fn0">
				<label>How to cite:</label>
				<p> Monsalve-Pulido, J.A. and Parra-Rodríguez, C.A., Characterization of postures to analyze people’s emotions using Kinect technology. DYNA, 85(205), pp. 256-263, June, 2018.</p>
			</fn>
		</fn-group>
		<fn-group>
			<fn fn-type="other" id="fn1">
				<label>1</label>
				<p>
					<ext-link ext-link-type="uri" xlink:href="https://www.slideshare.net/wearesocialsg/digital-in-2016">https://www.slideshare.net/wearesocialsg/digital-in-2016</ext-link>
				</p>
			</fn>
		</fn-group>
		<fn-group>
			<fn fn-type="other" id="fn2">
				<label>2</label>
				<p>
					<ext-link ext-link-type="uri" xlink:href="https://blogs.msdn.microsoft.com/esmsdn/2011/08/09/reto-sdk-de-kinect-detectar-posturas-con-skeletal-tracking/">https://blogs.msdn.microsoft.com/esmsdn/2011/08/09/reto-sdk-de-kinect-detectar-posturas-con-skeletal-tracking/</ext-link>
				</p>
			</fn>
		</fn-group>
		<fn-group>
			<fn fn-type="other" id="fn3">
				<label>3</label>
				<p>
					<ext-link ext-link-type="uri" xlink:href="http://openni.ru/reference-guide/index.html?t=index.html">http://openni.ru/reference-guide/index.html?t=index.html</ext-link>
				</p>
			</fn>
		</fn-group>
		<fn-group>
			<fn fn-type="other" id="fn4">
				<label>4</label>
				<p>
					<ext-link ext-link-type="uri" xlink:href="https://processing.org">https://processing.org</ext-link>
				</p>
			</fn>
		</fn-group>
		<fn-group>
			<fn fn-type="other" id="fn5">
				<label>5</label>
				<p>
					<ext-link ext-link-type="uri" xlink:href="https://www.cs.waikato.ac.nz/ml/weka/">https://www.cs.waikato.ac.nz/ml/weka/</ext-link>
				</p>
			</fn>
		</fn-group>
	</back>
</article>