Papers presented at CRI’2019

  • Léonie Tamo Mamtio and Gilbert Tindo. An efficient end to end verifiable voting system

Abstract : End-to-end (E2E) verifiability has been widely identified as a critical property for the adoption of e-voting systems in real-world electoral procedures. Moreover, one of the pillars of any vote, apart from the secrecy of the ballot and the integrity of the result, lies in the transparency of the process: the possibility for voters to understand the underlying system without resorting to technical expertise. The end-to-end verifiable electronic voting systems proposed in the literature do not always guarantee this, because they require additional setup assumptions, for example the existence of a trusted third party as a source of randomness, or the existence of a randomness beacon. In this work, we present a new end-to-end verifiable electronic voting system requiring only the existence of a consistent voting bulletin board. The end-to-end verifiability of our system is guaranteed by the existence of the bulletin board and by the involvement of the voters and the political parties in the process. This involvement compromises neither the confidentiality nor the integrity of the elections.

  • Rodrigue Konan Tchinda and Clémentin Tayou Djamegni. Enhancing Reasoning with the Extension Rule in CDCL SAT Solvers

Abstract : The extension rule, first introduced by G. Tseitin, is a simple but powerful rule that, when added to resolution, leads to an exponentially stronger proof system known as extended resolution (ER). Despite the outstanding theoretical results obtained with ER, its practical exploitation to improve SAT solvers’ efficiency still poses challenging issues. There have been several attempts in the literature to integrate the extension rule within CDCL SAT solvers, but the results are in general not as promising as in theory. An important remark about these attempts is that most of them focus on reducing the sizes of the proofs using the extended variables introduced in the solver. We adopt a different view in this work: we see extended variables as a means to enhance reasoning in solvers, and therefore to give them the ability to reason on various semantic aspects of variables. Experiments carried out on the 2018 SAT competition benchmarks show that the use of the extension rule in CDCL SAT solvers is practically useful for both satisfiable and unsatisfiable instances.
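
The extension rule itself is easy to state: introduce a fresh variable x defined by x ↔ (a ∨ b) for two existing literals a and b, encoded as three clauses. A minimal sketch in DIMACS-style signed-integer literals (our illustration, not the authors’ solver integration):

```python
def extend(clauses, a, b, x):
    """Tseitin extension rule: define a fresh variable x <-> (a OR b).

    Literals are non-zero signed integers (DIMACS convention); x must
    not already occur in `clauses`. The returned clause set is
    equisatisfiable with the original one."""
    return clauses + [
        [-x, a, b],  # x -> (a or b)
        [x, -a],     # a -> x
        [x, -b],     # b -> x
    ]

# Example: define variable 3 as (1 or 2) over an empty formula.
definition = extend([], 1, 2, 3)
```

In extended resolution, such definitions may then be resolved on like ordinary clauses, which is what makes the proof system exponentially stronger than plain resolution.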

  • Diane Tchuani Tchakonté, Emmanuel Simeu and Maurice Tchuente. Optimization of wireless sensor network lifetime for target coverage applications

Abstract : In many wireless sensor network applications, the sensor nodes are deployed over an area of interest to monitor a set of points called targets. Extending the network lifetime is a major challenge for these so-called target coverage applications, because the nodes are powered by a limited energy source. To extend the network lifetime, subsets of nodes called cover sets are formed and activated successively for defined durations while the other nodes are in sleep mode. This approach gives rise to an NP-hard problem. Under the assumption that the energy consumption of nodes in sleep mode is negligible, we propose a new greedy heuristic whose idea is to minimize the number of nodes that cover the critical targets in a cover set by using a blacklist. When a cover set is formed, all of its nodes are activated until one of them runs out of power or breaks down, and then a new cover set is formed. According to the simulations carried out, this heuristic gives solutions closer to the optimum than those obtained by heuristics from the literature.
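
The blacklist idea can be sketched as follows: identify a critical target (one monitored by the fewest sensors), blacklist the sensors that cover it, and build each cover set greedily while avoiding blacklisted sensors whenever possible. This is our reading of the abstract, not the authors’ exact heuristic:

```python
def greedy_cover_set(coverage, targets):
    """Sketch of a blacklist-guided greedy cover-set construction.

    coverage: dict mapping each sensor to the set of targets it monitors.
    Returns a list of sensors covering all targets, or None if the
    targets cannot all be covered."""
    # The critical target is the one monitored by the fewest sensors.
    critical = min(targets,
                   key=lambda t: sum(t in cov for cov in coverage.values()))
    blacklist = {s for s, cov in coverage.items() if critical in cov}
    uncovered, chosen = set(targets), []
    while uncovered:
        candidates = [s for s in coverage
                      if s not in chosen and coverage[s] & uncovered]
        if not candidates:
            return None
        # Prefer non-blacklisted sensors, then maximal new coverage.
        best = max(candidates,
                   key=lambda s: (s not in blacklist,
                                  len(coverage[s] & uncovered)))
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen

# Toy network: three sensors, three targets.
sensors = {'s1': {'t1', 't2'}, 's2': {'t2', 't3'}, 's3': {'t1', 't3'}}
cover = greedy_cover_set(sensors, ['t1', 't2', 't3'])
```

In the full lifetime-maximization scheme, this construction would be repeated each time a node of the active cover set fails or runs out of energy.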

  • Rodrigue Domga Komguem, Razvan Stanica, Maurice Tchuente and Fabrice Valois. Adaptive message generation in intersection monitoring wireless sensor networks

Abstract : In Intelligent Transportation Systems, sensors can be linearly deployed on the lanes of an intersection to measure vehicular traffic, with a view to intelligent traffic light management. The deployed sensors can generate messages either periodically, containing the number of detected vehicles, or upon each detection of a passing vehicle. In this paper, a theoretical analysis of the two approaches allows us to highlight their performance in terms of the number of messages generated in the network, which is directly related to the energy consumption of the sensors. We analyze real vehicular traffic data at intersections from the city of Cologne, and we show that the best message generation strategy depends on the geographical position of the intersection, the lane at the intersection, and also on the period of the day. Finally, based on these observations, we propose an adaptive message generation approach. The proposed approach is based on local measurement of vehicular traffic and considerably reduces the number of messages generated in the network.
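
The trade-off behind the analysis is simple to state: over an observation interval, a periodic sensor sends a fixed number of count messages, while an event-driven sensor sends one message per detected vehicle, so the better strategy depends on the traffic level. A toy comparison with hypothetical traffic figures (not from the Cologne data set):

```python
def messages_periodic(interval_s, period_s):
    """Messages sent over `interval_s` seconds by a sensor that reports
    a vehicle count every `period_s` seconds."""
    return interval_s // period_s

def messages_event_driven(vehicle_count):
    """Messages sent by a sensor that reports each detected vehicle."""
    return vehicle_count

# Hypothetical figures: on a busy lane (600 vehicles/hour), periodic
# reporting every 60 s sends fewer messages; on a quiet lane
# (20 vehicles/hour), event-driven reporting sends fewer.
HOUR = 3600
busy = messages_event_driven(600) > messages_periodic(HOUR, 60)
quiet = messages_event_driven(20) < messages_periodic(HOUR, 60)
```

An adaptive sensor would switch between the two modes based on its locally measured vehicle rate, which is the intuition the paper builds on.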

  • Olga Kengni Ngangmo, Ado Adamou Abba Ari, Dina Kolyang Taiwe and Mohamadou Alidou. Garanties de confidentialité différentielle de deux algorithmes d’un schéma multi-niveau de publication de données

Abstract : The smart web requires massive storage of data from various connected objects in the cloud. Publishing some of these data, such as medical information and financial transactions, can violate privacy. Many anonymization techniques are therefore used to keep the data private, but the anonymized data often become unusable. In this paper, we study the differential privacy guarantees of the algorithms of a multilevel data publishing scheme. The multilevel scheme perturbs the graph structure by adding fake edges, groups the vertices, and permutes the vertices within groups. We present the algorithms of the scheme and state whether or not they are differentially private.
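
As we understand the scheme described above, the perturbation step can be sketched as follows (parameter names, grouping policy, and the uniform fake-edge choice are our illustrative assumptions, not the paper’s):

```python
import random

def perturb_graph(edges, n, n_fake, group_size, seed=0):
    """Toy sketch of the multilevel perturbation: add fake edges, then
    permute vertex ids within fixed-size groups.

    edges: iterable of (u, v) pairs over vertices 0..n-1.
    Assumes the graph is sparse enough that n_fake new edges exist."""
    rng = random.Random(seed)
    edge_set = set(frozenset(e) for e in edges)
    # 1. Add fake edges chosen uniformly among absent vertex pairs.
    added = 0
    while added < n_fake:
        u, v = rng.sample(range(n), 2)
        e = frozenset((u, v))
        if e not in edge_set:
            edge_set.add(e)
            added += 1
    # 2. Group vertices and shuffle ids within each group.
    mapping = {}
    for start in range(0, n, group_size):
        group = list(range(start, min(start + group_size, n)))
        shuffled = group[:]
        rng.shuffle(shuffled)
        mapping.update(zip(group, shuffled))
    return sorted(tuple(sorted(mapping[x] for x in e)) for e in edge_set)

anon = perturb_graph([(0, 1), (1, 2)], n=6, n_fake=2, group_size=3)
```

Whether such a mechanism is differentially private depends on how the fake edges and permutations are drawn, which is precisely the question the paper examines.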

  • Fouakeu Tatieze Stéphane, Kamla Vivient Corneille and Ndamlabin Mboula Etienne. Genetic algorithm of centralized meta-scheduling in Cloud Computing

Abstract : Cloud Computing is a parallel and distributed computing system consisting of a collection of interconnected and virtualized computers that are dynamically provisioned and presented as one or more aggregated computing resources, based on service level agreements (SLAs) established through negotiation between the service provider and the consumers. Recent years have seen the massive migration of enterprise applications to cloud computing. One of the most important challenges of Cloud Computing is task scheduling, which should satisfy Cloud users in terms of Quality of Service and increase the profit of cloud providers. Bio-inspired (genetic) algorithms are a heuristic search technique that produces effective solutions. In this paper, we propose a meta-scheduling genetic algorithm that focuses on Quality of Service (QoS). The proposed algorithm matches the requirements of user requests with the availability of Cloud Computing resources (Virtual Machines) to obtain the best combination as an optimal solution. Theoretically, this genetic algorithm delivers the results of client jobs faster in terms of message exchanges.

  • Martin Xavier Tchembe, Maurice Tchoupé Tchendji and Armelle Linda Matene Kakeu. Une approche de génération de réseaux sociaux ad-hoc

Abstract : The use of social networks remains, to this day, confined to infrastructure-based networks such as the Internet. However, many situations (conferences, fairs, etc.) may require the rapid implementation and deployment of an ad-hoc application for disseminating information among the participants of an activity: this is the type of application we call an ad-hoc social network. These applications are deployable on mobile units that move freely and arbitrarily, are distributed, etc. They therefore inevitably share the same characteristics as those inherent to mobile ad-hoc networks, which consequently makes the latter good candidates for hosting them. In this paper, using techniques and tools from the field of generative programming, we propose an approach for producing generation environments for such applications from their specifications in a dedicated language. By applying this approach, we developed SMGenerator, an environment for generating ad-hoc social network mobile applications deployable on Android devices. Furthermore, with this platform we easily generated the ConfInfo application: an ad-hoc social network for disseminating information to the participants of a scientific event.

  • Soh Calvin Talle, Vivient Corneille Kamla and Jean Etienne Ndamlabin Mboula. A multi-agent distributed meta-scheduling model of micro-cloud based on acquaintances and double auctions – Communication

Abstract : The requirements of Internet of Things (IoT) applications, as well as the platform architectures to manage them, are still being explored. The IoT is defined as a paradigm that transforms physical objects into intelligent objects interconnected via the Internet. Today, IoT objects offer built-in intelligence that can be powerful when fully integrated in a collective way to meet the needs of users. Micro-clouds are a new way to harness the collective intelligence of IoT objects. It should be noted that as the size of a micro-cloud increases, so do its complexity and its performance, hence the need for decentralization where resources evolve over time without any prediction. This article provides an innovative model for scheduling and dynamically allocating resources in a distributed micro-cloud environment to ensure quality of service from the end-user’s point of view. Our model demonstrates the advantage of allocating resources through a network of acquaintances on the one hand, and of double-auction-based selling during decision making on the other.

  • Milliam Maxime Zekeng Ndadji, Maurice Tchoupe and Didier Parigot. A Projection-Stable Grammatical Model to Specify Workflows for their P2P and Artifact-Centric Execution

Abstract : In this manuscript, we are interested in the specification and decentralized execution of administrative workflows. We present a grammatical model to specify such processes by indicating, in addition to their fundamental elements, the permissions (reading, writing and execution) of each actor in relation to each of the tasks that compose them. We then present a decentralized and artifact-centric execution model for these processes, on a Peer-to-Peer (P2P) Workflow Management System (WfMS). Our execution model allows the confidential execution of certain tasks by ensuring that each actor potentially has only a partial perception of the overall process execution status. In our approach, we propose various stable projection algorithms that make it possible to obtain the various potentially partial perceptions, to verify their coherence, and to guarantee their convergence. Our algorithms are then coded and tested using a graphical tool that simulates the decentralized execution of administrative processes.

  • Willy Kengne Kungne, Georges-Edouard Kouamou and Claude Tangha. Extending an artifact-driven workflow model to service composition

Abstract : Traditionally, service composition languages rely on process-oriented workflows, of which the best known are BPEL4WS, WS-CDL and SCA. They are imperative and focused on how composite services must be constructed, and are therefore rigid to change at runtime. With the advent of artifact-driven workflows, we intend to exploit one of the proposed models, the Guarded Attributed Grammars (GAG) model, for service composition, in order to highlight its properties in this setting.

  • Gérard Nzebop Ndenoka, Tchuente Maurice and Emmanuel Simeu. Langage et sémantique des expressions pour la synthèse de modèle Grafcet dans un environnement IDM

Abstract : The GRAphe Fonctionnel de Commande Etapes Transitions (GRAFCET) is a powerful graphical modeling language for specifying controllers in discrete event systems. It uses expressions to state the firing conditions of transitions and of conditional actions, as well as the logical and arithmetic expressions assigned to stored actions. Much work has addressed the transformation of Grafcet specifications (including expressions) into control code for embedded systems. To facilitate the editing of valid Grafcet models and the generation of control code, it is worthwhile to propose a formalization of the Grafcet expression language, allowing its constructs to be validated and given an appropriate semantics. To this end, we propose a context-free grammar that generates the whole set of Grafcet expressions by extending the usual grammars of arithmetic and logical expressions. We also propose a metamodel and an associated semantics for Grafcet expressions, to facilitate the implementation of the Grafcet language in the form of a parser of Grafcet expressions, G7Expr, obtained with the ANTLR parser generator, while the metamodel is implemented in the Eclipse EMF Model-Driven Engineering (MDE) environment. Combining the two tools makes it possible to automatically analyze and build Grafcet expressions during the editing and synthesis of Grafcet models.

  • Jean-Baptiste Bogneh Noussi, Maurice Tchoupé Tchendji and Sylvain Iloga. Parallel HMM-based similarity between finite sets of histograms

Abstract : Histogram comparison is nowadays of major interest in various domains such as Data Mining and Machine Learning, and much work has been done on this issue over the past decades. In most of these works, histograms are handled as simple vectors, and their visual shapes are therefore neglected. To overcome this limitation, an accurate similarity measure between two finite sets of histograms was proposed in 2018. In that work, the visual shapes of the histograms are captured in Hidden Markov Models (HMMs), which are later compared to derive the similarity. However, this measure is highly time-consuming for some applications such as color image comparison. This paper aims at reducing this time cost through a two-level parallel implementation of the measure. At the first level, the two sets of histograms are handled concurrently, while at the second level a parallel version of the Baum-Welch algorithm is executed to train the model associated with each set. Experiments on the same color images as in the initial work exhibited an average speed-up of 7.42 with a standard deviation of 0.35 on a cluster of 8 machines.

  • Germes Obiang and Norbert Tsopze. Extraction des caractéristiques lexico-grammaticales et couplage des unités CRF (Conditional Random Field) au réseau de neurones profond pour l’extraction des aspects

Abstract : The Internet contains a wealth of information in the form of unstructured texts, such as customer comments on products, events and more. By extracting and analyzing in detail the opinions expressed in customer comments, it is possible to obtain valuable insights for customers and companies. The model proposed by Jebbara et al. for aspect extraction, winner of the SemEval2016 competition, suffers from the absence of lexico-grammatical input features and from poor performance in the detection of compound aspects. We propose a model based on a recurrent neural network for the task of extracting the aspects of an entity for sentiment analysis. The proposed BiGRU-CRF model improves on the Jebbara model: the modification consists in adding a CRF layer to take into account the dependencies between labels, and in extending the feature space with grammatical-level and lexical-level features. Experiments on the two SemEval2016 data sets validated our approach and showed an improvement of about 3.5% in the F-score.

  • Clarisse Reine Djamkou Kameni and Norbert Tsopze. Approche basée sur les règles d’association pour la prise en compte des dépendances entre les classes dans un problème de classification multi-label

Abstract : Multi-label classification consists in associating one or more classes with an example. In many real-world situations, the classes are not independent. The Classifiers Chain approach was introduced to take this into account, but it also suffers from error propagation, since the actual existence of a correlation between labels is not guaranteed. In this work, we propose an approach based on association rules to take label correlations into account in the classifier chain. We first determine the associations between a target label and the other labels; only the associated labels are then taken into account in building the classifier for that target. Experimental results on CV classification show that the proposed model improves on the classifier chain.
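
The association-rule filtering step can be sketched as follows: for a given target label, keep only the labels whose presence in the training set implies the target with sufficient support and confidence. A minimal sketch with threshold values of our own choosing, not the paper’s:

```python
def associated_labels(Y, target, min_supp=0.1, min_conf=0.6):
    """Illustrative mining of label associations (l -> target): keep
    the labels whose presence implies the target label often enough.

    Y: list of label sets, one per training example."""
    n = len(Y)
    others = set().union(*Y) - {target}
    kept = set()
    for l in others:
        supp_l = sum(l in y for y in Y)          # examples containing l
        supp_both = sum(l in y and target in y for y in Y)
        if (supp_l and supp_both / n >= min_supp
                and supp_both / supp_l >= min_conf):
            kept.add(l)
    return kept

# Only the labels returned here feed the chained classifier for `target`.
train = [{'a', 'b'}, {'a', 'b'}, {'a'}, {'c'}]
relevant = associated_labels(train, 'b')
```

Restricting the chain to these associated labels is what limits error propagation: a target classifier never depends on a label it is not actually correlated with.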

  • Michael Franklin Mbouopda and Paulin Melatagia Yonta. A Word Representation to Improve Named Entity Recognition in Low-resource Languages

Abstract : Named Entity Recognition (NER) is a fundamental task in many NLP applications that seek to identify and classify expressions such as people, location, and organization names. Many NER systems have been developed, but the annotated data needed for good performance are not available for low-resource languages, such as Cameroonian languages. In this paper, we exploit the low frequency of named entities in text to define a new, suitable cross-lingual distributional representation for named entity recognition. We build the first named entity recognizer for Ewondo (a Bantu low-resource language of Cameroon) by projecting named entity tags from English using our word representation. In terms of Recall, Precision and F-score, the obtained results show the effectiveness of the proposed distributional representation of words.

  • Wong Caroline Felicite, Kamla Vivient Corneille and Obaya Mureille Laure. Agent-based coordination protocol at a T-junction

Abstract : The transport of people and goods has long been confronted with congestion, which arises when two or more rival groups of vehicles wish to access the same portion of the road (the critical zone) at the same moment. A crossroads is characterized by the meeting of several streets, which determines entrance and exit corridors and the number of conflict points. Crossroads are not specific to road transport networks: they also occur in railways and even in the industrial production chains of companies. Moreover, congestion leads to the formation of queues with excessive delays, corresponding to the increase in the number of vehicles in circulation and the ever-growing need for mobility, to the overconsumption of energy, and even to pollution at intersections. To reduce this congestion, several coordination mechanisms have been proposed, either by reducing the collision rate at the conflict points of a junction, or by limiting the congestion caused by the relatively long waiting time to access the shared resource of the crossroads, by acting on the regulation of traffic lights or on the movement of the entities involved. Despite these solutions, the collision rate and the average latency of vehicles at junctions are still considerable. In this paper, we propose a junction coordination protocol based on the dynamic priority and the urgency of the convoys on the entrance lanes of a junction. We set up a convoy construction policy and a dynamic priority function based on the convoy’s dwell time, its length, its dynamic priority, and the individual urgency of its constituent entities.

  • Azanzi Jiomekong, Paulin Melatagia, Vanil Feudjieu and Gaoussou Camara. Extraction des connaissances ontologiques du code source Java en utilisant les Chaînes de Markov Cachées

Abstract : Building domain ontologies requires access to the knowledge held by domain experts or contained in knowledge sources. However, experts are not always available, so knowledge engineers turn to other knowledge sources such as textual documents, databases, source code, etc. Many approaches proposed in the literature for extracting knowledge from these sources are limited to extracting terminologies, concepts and properties, leaving aside axioms and rules. In recent work, we presented the use of Hidden Markov Models to extract ontological knowledge from Java source code. In this article, we evaluate this approach using a parser as a reference. The evaluation yielded the following results: recall = 87.8%, precision = 100% and F-measure = 93.5%.