PAPER ABSTRACTS
JOURNALS
JBCS 2017
M. A. Brandão, M. M. Moro. The Strength of Co-authorship Ties through Different Topological Properties. Journal of the Brazilian Computer Society 23(5), 2017.
ABSTRACT: Social networks are complex structures that describe individuals (graph nodes) connected in any social context (graph edges). Different metrics can be applied to those networks and their properties in order to understand behavior and even predict the future. One such property is tie strength, which makes it possible to identify prominent individuals, analyze how relationships play different roles, predict links, and so on. Here, we specifically address the problem of measuring tie strength in co-authorship social networks (nodes are researchers and edges represent their co-authored publications). We start by presenting four cases that emphasize the problems of current metrics. Then, we propose a new metric for tie strength, called tieness, that is simple to calculate and better differentiates the degrees of strength. Accompanied by a nominal scale, tieness also provides better results when compared to the existing metrics. Our analyses consider three real social networks built from publications collected from digital libraries on Computer Science, Medicine, and Physics. Finally, we also make all datasets publicly available.
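The abstract does not reproduce the tieness formula itself, so the following minimal sketch (in Python, with hypothetical author names) only illustrates the kind of computation involved, using neighborhood overlap, a common tie-strength baseline, over a toy co-authorship graph.

```python
# Illustrative sketch only: the paper's "tieness" formula is not reproduced here.
# This computes neighborhood overlap, a common tie-strength baseline, on a toy
# co-authorship graph represented as adjacency sets (all names hypothetical).
from collections import defaultdict

def build_graph(papers):
    """papers: list of author lists; returns undirected adjacency sets."""
    graph = defaultdict(set)
    for authors in papers:
        for a in authors:
            for b in authors:
                if a != b:
                    graph[a].add(b)
    return graph

def neighborhood_overlap(graph, a, b):
    """|shared co-authors| / |all co-authors of a or b, excluding a and b|."""
    common = graph[a] & graph[b]
    union = (graph[a] | graph[b]) - {a, b}
    return len(common) / len(union) if union else 0.0

papers = [["Ana", "Bia", "Caio"], ["Ana", "Bia"], ["Caio", "Duda"]]
g = build_graph(papers)
print(neighborhood_overlap(g, "Ana", "Bia"))   # 1.0: their only other co-author (Caio) is shared
print(neighborhood_overlap(g, "Ana", "Duda"))  # 0.5: Caio is shared, Bia is not
```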
ComCom 2017
M. A. Brandão, M. M. Moro. Social professional networks: A survey and taxonomy. Computer Communications 100 (March): 20-31, 2017.
ABSTRACT: Social professional networks provide features not available in other networks. For example, LinkedIn and AngelList facilitate professional networking, and GitHub enables committing and sharing code. Such social networks also provide data with information about users, their behavior, interactions and posted content. Here, we aim to foster a deeper understanding of social professional networks' types, definitions, features, analyses, and applications, while providing a useful taxonomy of their use.
JBCS 2017b
L. H. C. Lima, G. Penha, L. M. A. Rocha, M. M. Moro, A. P. C. Silva, A. H. F. Laender, J. P. M. de Oliveira. The collaboration network of the Brazilian Symposium on Databases - 30 editions of history. Journal of the Brazilian Computer Society 23(10), 2017.
ABSTRACT: The Brazilian Symposium on Databases (SBBD) celebrated its 30th edition in October 2015. As the database community has evolved over the years, so has the data analysis area. To celebrate such accomplishments, this article goes over the SBBD history from distinct social perspectives. Overall, we investigate the complete SBBD co-authorship network built from bibliographic data of SBBD's 30 editions, from 1986 to 2015, and analyze several network metrics, considering the network evolution over the three decades. In particular, we analyze the progress of the most engaged SBBD authors, the number of distinct authors, institutions, and published papers, and the evolution of some of the most frequent terms presented in the titles of the papers, as well as the influence and impact of the most prominent SBBD authors.
DKE 2016
E. G. Barros, A. H. F. Laender, M. M. Moro, A. S. da Silva. LCA-based algorithms for efficiently processing multiple keyword queries over XML streams. Data & Knowledge Engineering 103 (May): 1–18, 2016.
ABSTRACT: In a stream environment, differently from traditional databases, data arrive continuously, unindexed and potentially unbounded, whereas queries must be evaluated to produce results on the fly. In this article, we propose two new algorithms (called SLCAStream and ELCAStream) for processing multiple keyword queries over XML streams. Both algorithms process keyword-based queries that require minimal or no schema knowledge to be formulated, follow the lowest common ancestor (LCA) semantics, and provide optimized methods to improve the overall performance. Moreover, SLCAStream, which implements the smallest LCA (SLCA) semantics, outperforms the state-of-the-art, with up to a 49% reduction in response time and 36% in memory usage. In turn, ELCAStream is the first to explore the exclusive LCA (ELCA) semantics over XML streams. A comprehensive set of experiments evaluates several aspects related to the performance and scalability of both algorithms, showing they are effective alternatives to search services over XML streams.
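As a rough illustration of the SLCA semantics mentioned above (not of the streaming SLCAStream/ELCAStream algorithms themselves), the sketch below computes SLCA nodes for a keyword query over a small in-memory XML tree; the sample document and tags are hypothetical.

```python
# Minimal in-memory illustration of SLCA semantics (not the streaming algorithms
# from the paper). An element is an SLCA of a keyword query if its subtree
# contains all keywords and no descendant's subtree does.
import xml.etree.ElementTree as ET

def subtree_text(elem):
    return " ".join(elem.itertext()).lower()

def slca(root, keywords):
    keywords = [k.lower() for k in keywords]
    results = []

    def contains_all(elem):
        text = subtree_text(elem)
        return all(k in text for k in keywords)

    def visit(elem):
        # True if elem's subtree already yielded an SLCA node.
        found_below = any([visit(child) for child in list(elem)])
        if found_below:
            return True
        if contains_all(elem):
            results.append(elem)
            return True
        return False

    visit(root)
    return results

doc = ET.fromstring(
    "<bib><paper><author>Barros</author><title>XML streams</title></paper>"
    "<paper><author>Moro</author><title>XML filtering</title></paper></bib>")
print([e.tag for e in slca(doc, ["barros", "streams"])])  # ['paper']
```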
Scientometrics 2016
T.H.P. Silva, G. Penha, A. P. C. da Silva, M. M. Moro. A performance indicator for academic communities based on external publication profiles. Scientometrics 107 (3): 1389-1403, 2016
ABSTRACT: Studying research productivity is a challenging task that is important for understanding how science evolves and crucial for agencies (and governments). In this context, we propose an approach for quantifying the scientific performance of a community (group of researchers) based on the similarity between its publication profile and a reference community's publication profile. Unlike most approaches that consider citation analysis, which requires access to the content of a publication, we only need the researchers' publication records. We investigate the similarity between communities and adopt a new metric named Volume Intensity. Our goal is to use Volume Intensity for measuring the internationality degree of a community. Our experimental results, using Computer Science graduate programs and including both real and random scenarios, show that publication profiles can be used as a performance indicator.
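The Volume Intensity formula is not given in the abstract; as an assumed stand-in, the sketch below compares a community's publication profile with a reference profile using plain cosine similarity over venue-frequency vectors (all venue lists are hypothetical).

```python
# Sketch of comparing a community's publication profile against a reference
# community's profile. The paper's Volume Intensity metric is not reproduced
# here; cosine similarity over venue-frequency vectors is a simple stand-in,
# and the venue names below are hypothetical.
import math
from collections import Counter

def profile(publications):
    """publications: list of venue names -> normalized frequency vector."""
    counts = Counter(publications)
    total = sum(counts.values())
    return {venue: n / total for venue, n in counts.items()}

def cosine(p, q):
    venues = set(p) | set(q)
    dot = sum(p.get(v, 0.0) * q.get(v, 0.0) for v in venues)
    norm = (math.sqrt(sum(x * x for x in p.values()))
            * math.sqrt(sum(x * x for x in q.values())))
    return dot / norm if norm else 0.0

community = profile(["SBBD", "SBBD", "VLDB", "SIGMOD"])
reference = profile(["VLDB", "SIGMOD", "ICDE", "SIGMOD"])
print(round(cosine(community, reference), 3))  # similarity to the reference profile
```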
JIDM 2015
T. H. P. Silva, L. M. A. Rocha, A. P. C. Silva, M. M. Moro. 3c-index: Research Contribution Across Communities as an Influence Indicator. Journal of Information and Data Management 6 (3): 192-205, 2015.
ABSTRACT: This paper proposes a new influence metric (called 3c-index) derived from bibliographic data and social network analysis. Given a set of communities defined by publication venues, the goal is to measure the degree of influence of researchers by evaluating the links they establish between communities. Specifically, each researcher has a base community where he/she presents greater influence. Then, when such a researcher works in a different community (besides the base community), he/she takes new knowledge to that community and transfers influence, which improves the global quality of the communities. By weighting such transfer, we measure the influence of researchers within and across their communities. We also experimentally evaluate the performance of the new index against well-known metrics (volume of publications, number of citations and h-index). The results show that 3c-index outperforms them in most cases and can be employed as a complementary metric to assess researchers' productivity.
Scientometrics 2015
H. Lima, T. H. P. Silva, M. M. Moro, R. L. T. Santos, W. Meira Jr., A. H. F. Laender. Assessing the profile of top Brazilian computer science researchers. Scientometrics 103 (3): 879-896, 2015.
ABSTRACT: Quantitative and qualitative studies of scientific performance provide a measure of scientific productivity and represent a stimulus for improving research quality. Whatever the goal (e.g., hiring, firing, promoting or funding), such analyses may inform research agencies on directions for funding policies. In this article, we perform a data-driven assessment of the performance of top Brazilian computer science researchers considering three central dimensions: career length, number of students mentored, and volume of publications and citations. In addition, we analyze the researchers’ publishing strategy, based upon their area of expertise and their focus on venues of different impact. Our findings demonstrate that it is necessary to go beyond counting publications to assess research quality and show the importance of considering the peculiarities of different areas of expertise while carrying out such an assessment.
JIDM 2014
M. A. Brandão, M. M. Moro, J. M. Almeida. Experimental Evaluation of Academic Collaboration Recommendation using Factorial Design. Journal of Information and Data Management 5 (1): 52-63, 2014
ABSTRACT: Recommender systems have been widely used in e-commerce and online social networks. Among the various challenges in constructing such systems, how to parameterize them and how to evaluate them are two scarcely explored issues. Generally, each recommendation strategy has parameters and factors that can be varied. In this article, we propose to evaluate the impact of key parameters of two state-of-the-art functions that recommend academic collaborations. Our experimental results show that the factors affect recall, novelty, diversity and coverage of the recommendations in different ways. Finally, such evaluation shows the importance of studying the impact of the factors and factor interactions in the academic collaboration recommendation context.
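As a sketch of the factorial-design idea, the snippet below enumerates a full 2^3 design over hypothetical recommendation parameters; evaluate() is a placeholder, not one of the paper's recommendation functions.

```python
# Sketch of a full factorial design over recommendation parameters: every
# combination of factor levels is run once. Factor names and levels are
# hypothetical, and evaluate() stands in for running one recommender setup.
from itertools import product

factors = {
    "similarity_weight": [0.3, 0.7],
    "min_common_coauthors": [1, 2],
    "use_temporal_decay": [False, True],
}

def evaluate(config):
    """Placeholder for running a recommender and measuring recall/novelty/etc."""
    return {"recall": 0.0, "novelty": 0.0}  # hypothetical metrics

names = list(factors)
for levels in product(*(factors[n] for n in names)):
    config = dict(zip(names, levels))
    print(config, evaluate(config))
# 2 x 2 x 2 = 8 runs; main effects and factor interactions can then be
# estimated from the differences in the measured metrics across runs.
```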
JIDM 2013
D. A. Guimarães, F. L. Arcanjo, L. R. Antuna, M. M. Moro, R. A. C. Ferreira. Processing XPath Structural Constraints on GPU. Journal of Information and Data Management 4 (1): 47-56, 2013
ABSTRACT: Technologies such as CUDA and OpenCL have popularized the usage of graphics cards (GPUs) for general purpose programming, often with impressive performance gains. However, using such cards for speeding up XML database processing is yet to be fully explored. XML databases offer much flexibility for Web-oriented systems. Nonetheless, such flexibility comes at a considerable computational cost. This article shows how graphics cards can be leveraged to reduce the computational cost of processing an important subset of XPath queries. It presents an algorithm designed to consider the cost model of GPUs and to evaluate queries efficiently. An experimental study reveals that this algorithm is more efficient than implementations of a similar strategy on CPU for all the datasets tested. The speedups with respect to eXist-db, a popular XML database system, are as high as two orders of magnitude.
IJCSA 2012
G.R. Lopes, R. da Silva, M.M. Moro, J.P.M. de Oliveira. Scientific Collaboration in Research Networks: A Quantification Method by Using Gini Coefficient. International Journal of Computer Science & Applications, 9 (2): 15-31, 2012
ABSTRACT: In the scientific community, it is very common to try to create sound metrics for practically everything that can be measured. One of the current trends is to consider aspects from social networks for defining evaluation metrics. Following such a trend, our work proposes applying the Gini coefficient for evaluating research networks from two different perspectives. The first one analyzes the temporal evolution of research networks by considering the Gini coefficient of the distribution of researchers who have co-authored publications. The second one compares different internal collaboration networks of graduate programs and applies the Gini coefficient to support the ranking creation task. Both ideas are demonstrated through experiments that show the validity and applicability of our approach for quantifying scientific collaborations. Moreover, we also propose a new index that combines two metrics for evaluating the collaboration network of graduate programs. We believe that this index should be easily applied to other groups and research areas. This is the first time that the Gini coefficient is applied over social networks for evaluating research.
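The Gini coefficient itself is standard; the sketch below computes it over hypothetical per-researcher publication counts, the kind of distribution the paper evaluates.

```python
# The Gini coefficient over a distribution of per-researcher publication (or
# co-authorship) counts, as a rough sketch of the inequality measure the paper
# applies to research networks. The input counts are hypothetical.
def gini(values):
    """Gini coefficient of non-negative numbers (0 = perfect equality)."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula based on ranks of the sorted values.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2.0 * cum) / (n * total) - (n + 1.0) / n

print(gini([5, 5, 5, 5]))    # 0.0  -> collaborations spread evenly
print(gini([0, 0, 0, 20]))   # 0.75 -> concentrated in a single researcher
```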
JBCS 2012
C. Bigonha, T.N.C. Cardoso, M.M. Moro, M.A. Gonçalves, V.A.F. Almeida. Sentiment-based influence detection on Twitter. Journal of the Brazilian Computer Society 18(3):169-183, 2012.
ABSTRACT: The user generated content available in online communities is easy to create and consume. Lately, it also became strategically important to companies interested in obtaining population feedback on products, merchandising, etc. One of the most important online communities is Twitter: recent statistics report 65 million new tweets each day. However, processing this amount of data is very costly and a big portion of the content is simply not useful for strategic analysis. Thus, in order to filter the data to be analyzed, we propose a new method for ranking the most influential users in Twitter. Our approach is based on a combination of the user position in networks that emerge from Twitter relations, the polarity of her opinions and the textual quality of her tweets. Our experimental evaluation shows that our approach can successfully identify some of the most influential users and that interactions between users provide the best evidence to determine user influence.
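A rough sketch of the ranking idea follows: a weighted combination of network position, opinion polarity, and text quality. The individual scores and weights are hypothetical placeholders, not the paper's actual formulation.

```python
# Rough sketch of ranking users by a weighted combination of network position,
# opinion polarity, and tweet quality, in the spirit of the approach described
# above. Scores and weights are hypothetical placeholders.
def influence_score(user, w_net=0.5, w_pol=0.3, w_qual=0.2):
    return (w_net * user["network_position"]   # e.g., centrality in the interaction graph
            + w_pol * abs(user["polarity"])    # strength of expressed opinions
            + w_qual * user["text_quality"])   # e.g., simple writing-quality heuristics

users = [
    {"name": "u1", "network_position": 0.9, "polarity": 0.2, "text_quality": 0.8},
    {"name": "u2", "network_position": 0.4, "polarity": 0.9, "text_quality": 0.6},
]
ranking = sorted(users, key=influence_score, reverse=True)
print([u["name"] for u in ranking])  # ['u1', 'u2']
```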
JIDM 2011
A.C. Hora, C. Davis Jr., M.M. Moro. Mapping Network Relationships from Spatial Database Schemas to GML Documents. Journal of Information and Data Management, vol. 2, n. 1, pp. 67-74, 2011.
ABSTRACT: Spatial data encoded in GML documents are used in various applications and are especially suited to storing, manipulating and exchanging geographic information. However, a large share of currently available spatial data is stored in spatial databases. This article presents a method to map arcs and nodes, organized in a network using spatial relationships, from a spatial database to a GML document. Specifically, a geographical conceptual schema and the corresponding GML schema are used as guide to retrieve and reorganize networking information found in the spatial database, thus generating a GML document. The proposed methodology is verified in a case study, in which networking relationships from real-world databases are mapped to GML documents that can be queried using standard XML languages such as XPath and XQuery.
JIDM 2010
E. G. Barros, M. M. Moro, A. H. F. Laender. An Evaluation Study of Search Algorithms for XML Streams. Journal of Information and Data Management, vol. 1, n. 3, October 2010.
ABSTRACT: Keyword-based searching services over XML streams are essential for widely used streaming applications, such as dissemination services, sensor networks and stock market quotes. However, XML stream keyword search algorithms are usually schema dependent and do not allow pure keyword queries. Furthermore, ranking methods are still relatively unexploited in such algorithms. This paper presents an accuracy and performance study of two keyword-based search algorithms for XML streams. Our study provides a comparison of these two algorithms by using an XPath benchmark as the source of data and queries. Moreover, we also consider a large collection of XML documents and a large set of random queries, both based on the DBLP dataset. Finally, we propose a strategy that combines both algorithms and ranks the keyword-based search results.
SIGMOD Record 2009b
M. M. Moro, V. Braganholo, C. F. Dorneles, D. Duarte, R. Galante, R. S. Mello. XML: Some Papers in a Haystack. SIGMOD Record, vol 38, n 02, June 2009.
ABSTRACT: XML has been explored by both research and industry communities. More than 5500 papers were published on different aspects of XML. With so many publications, it is hard for someone to decide where to start. Hence, this paper presents some of the research topics on XML, namely: XML on relational databases, query processing, views, data matching, and schema evolution. It then summarizes some (some!) of the most relevant or traditional papers on those subjects.
SIGMOD Record 2009a
A. H. F. Laender, M. M. Moro, C. Nascimento, P. Martins. An X-Ray on Web-Available XML Schemas. SIGMOD Record, vol 38, n 01, March 2009.
ABSTRACT: XML has conquered its place as the most used standard for representing Web data. An XML schema may be employed for purposes similar to those of database schemas. There are different languages to write an XML schema, such as DTD and XSD. In this paper, we provide a general view, an X-Ray, on Web-available XSD files by identifying which XSD constructs are more and less frequently used. Furthermore, we provide an evolution perspective, showing results from XSD files collected in 2005 and 2008. Hence, we can also draw some conclusions on what trends seem to exist in XSD usage. The results of such a study provide relevant information for developers of XML applications, tools and algorithms in which the schema has a distinguished role.
VLDB 2008
M. M. Moro, Z. Vagena, V. J. Tsotras. XML Structural Summaries, Tutorial. PVLDB, vol. 1, n. 2, 1524-1525. 2008.
ABSTRACT: This tutorial introduces the concept of XML Structural Summaries and describes their role within XML retrieval. It covers the usage of those summaries for Database-style query processing and Information Retrieval-style search tasks in the context of both centralized and distributed environments. Finally, it discusses new retrieval scenarios that can potentially be favorably supported by those summaries.
JUCS 2006
R. Machado, A.F. Moreira, R.M. Galante, M.M. Moro. Type-safe Versioned Object Query Language. Journal of Universal Computer Science, Vol. 12, No. 7, pp. 938-957, September 2006.
ABSTRACT: The concept of versioning was initially proposed for controlling design evolution in computer aided design and software engineering. In the context of database systems, versioning is applied for managing the evolution of different elements of the data. Modern database systems provide not only powerful data models but also complex query languages that have evolved to include several features from complex programming languages. While most related work focuses on different aspects of version concepts, design models, and efficient version processing, there is yet to be a formal definition of a query language for database systems with version control. In this work we propose a query language, named Versioned Object Query Language (VOQL), that extends the ODMG Object Query Language (OQL) with new features to recover object versions. We provide a precise definition of VOQL through a type system and we prove it safe with respect to a small-step operational semantics. Finally, we validate the proposed definition by implementing an interpreter for VOQL.
IEEE Potentials 2005
M.M. Moro, V.P. Braganholo, A.C. Nácul, M.R. Fornari. The Successful Grad Student. IEEE Potentials, n.03, vol.24, August-September, 2005.
ABSTRACT: You are starting a graduate program, or just thinking about it, and you have lots of questions. Where do you start? What do you do? What can you expect from the course? Based on our experience and the problems our colleagues have faced, we present some ideas and practical suggestions to help new students to succeed in this incredible journey, a graduate program in computer science. We don’t intend to exhaust the topic. That would be virtually impossible as there are thousands of different situations, as well as lots of articles with tips for graduate students.
This article is a collection of tips and an overview of the process through which all graduate students pass during their journey. Since we are more familiar with the database area, some of the examples are in this field. But we believe that this guide can be equally important to students of all fields in computer science and possibly to students in other areas as well.
RITA 2004
M.M. Moro, V.P. Braganholo, A.C. Nácul, M.R. Fornari. Rumo ao Título de Doutor/Mestre. Revista de Informática Teórica e Aplicada, n.02, vol.10, Porto Alegre: Instituto de Informática, 2004.
In Portuguese.
ABSTRACT: A significant portion of new students begins a graduate course in Computer Science without a clear idea of where to start and what to expect from the course. With this reality in mind, the goal of this paper is to provide an initial guide for those new students. Hence, we present practical tips and ideas, written as a “talk between friends”, hoping to foster a good start and to guide students throughout the course.
RITA 2002
M.M. Moro, N. Edelweiss, C.S. dos Santos. Modelo Temporal de Versões. Revista de Informática Teórica e Aplicada, n. 01, vol. 9, ISSN 0103-4308, Porto Alegre: Instituto de Informática, 2002, pp. 37-51.
In Portuguese.
ABSTRACT: This work presents an alternative for the union of temporal data and a version model. The result, the Temporal Versions Model, is able to store object versions and, for each version, the history of its dynamic attribute and relationship values. TVM is ideal for modeling time-evolving systems that need to manage design alternatives as versions. An interface for modeling TVM classes is also presented.
CONFERENCES & WORKSHOPS
JCDL 2013
H. Lima, T. H. P. Silva, M. M. Moro, R. L. T. Santos, W. Meira Jr., A. H. F. Laender. Aggregating Productivity Indices for Ranking Researchers across Multiple Areas. In: ACM/IEEE Joint Conference in Digital Library (JCDL), 2013.
ABSTRACT: The impact of scientific research has traditionally been quantified using productivity indices such as the well-known h-index. On the other hand, different research fields—in fact, even different research areas within a single field—may have very different publishing patterns, which may not be well described by a single, global index. In this paper, we argue that productivity indices should account for the singularities of the publication patterns of different research areas, in order to produce an unbiased assessment of the impact of scientific research. Inspired by ranking aggregation approaches in distributed information retrieval, we propose a novel approach for ranking researchers across multiple research areas. Our approach is generic and produces cross-area versions of any global productivity index, such as the volume of publications, citation count and even the h-index. Our thorough evaluation considering multiple areas within the broad field of Computer Science shows that our cross-area indices outperform their global counterparts when assessed against the official ranking produced by CNPq, the Brazilian National Council for Scientific and Technological Development. As a result, this paper contributes a valuable mechanism to support the decisions of funding bodies and research agencies, for example, in any research assessment effort.
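As a simple illustration (not the paper's aggregation method), the sketch below computes the global h-index and then a within-area percentile, one naive way to make an index comparable across areas; the researcher data is hypothetical.

```python
# Sketch: compute a global index (h-index) and make it comparable across
# research areas via a within-area percentile. This is only an illustration of
# "cross-area" normalization, not the paper's aggregation approach; the
# researchers and citation counts below are hypothetical.
def h_index(citations):
    """Largest h such that the researcher has h papers with >= h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
    return h

def within_area_percentile(value, area_values):
    return sum(v <= value for v in area_values) / len(area_values)

researchers = {
    "r1": {"area": "Databases", "citations": [10, 8, 5, 4, 3]},
    "r2": {"area": "Databases", "citations": [50, 2, 1]},
    "r3": {"area": "Theory",    "citations": [6, 6, 6, 1]},
}
by_area = {}
for info in researchers.values():
    by_area.setdefault(info["area"], []).append(h_index(info["citations"]))

for name, info in researchers.items():
    h = h_index(info["citations"])
    print(name, h, within_area_percentile(h, by_area[info["area"]]))
```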
SIMPLEX 2013
M. Brandão, M. M. Moro, G. R. Lopes, J. P. M. Oliveira. Using Link Semantics to Recommend Collaborations in Academic Social Networks. In: Work. on Simplifying Complex Networks for Practitioners (SIMPLEX) - WWW Companion Volume, Rio de Janeiro, Brazil 2013.
ABSTRACT: Social network analysis (SNA) has been explored in many contexts with different goals. Here, we use concepts from SNA for recommending collaborations in academic networks. Recent work shows that research groups with well connected academic networks tend to be more prolific. Hence, recommending collaborations is useful for increasing a group's connections, thereby boosting the group's research as a side benefit. In this work, we propose two new metrics for recommending new collaborations or the intensification of existing ones. Each metric considers a social principle (homophily and proximity) that is relevant within the academic context. The focus is to verify how these metrics influence the resulting recommendations. We also propose new metrics for evaluating the recommendations based on social concepts (novelty, diversity and coverage) that have never been used for such a goal. Our experimental evaluation shows that considering our new metrics improves the quality of the recommendations when compared to the state-of-the-art.
SIGMOD 2013
W. Viana, Mirella M. M. Moro. FriendRouter: real-time path finder in social networks. In: SIGMOD Undergraduate Research Poster Competition, New York City, USA, 2013.
ABSTRACT: Online social networks have become a platform for running and optimizing classical algorithms. Here, we introduce a tool for finding paths between social network users in real-time, a task that classical solutions are not tailored for.
SIGMOD 2012
E. M. Barbosa, M. M. Moro, G.R. Lopes, J.P.M. Oliveira. VRRC: Web Based Tool for Visualization and Recommendation on Co-Authorship Network. In: SIGMOD Undergraduate Research Poster Competition, Scottsdale, USA, 2012.
ABSTRACT: Scientific studies are usually developed by contributions from different researchers. Analyzing such collaborations is often necessary, for example, when evaluating the quality of a research group. Also, identifying new partnership possibilities within a set of researchers is frequently desired, for example, when looking for partners in foreign countries. Both analysis and identification are not easy tasks, and are usually done manually. This work presents VRRC, a new approach for visualizing recommendations of people within a co-authorship network (i.e., a graph in which nodes represent researchers and edges represent their co-authorships). VRRC input is a publication list from which it extracts the co-authorships. VRRC then recommends which relations could be created or intensified based on metrics designed for evaluating co-authorship networks. Finally, VRRC provides brand new ways to visualize not only the final recommendations but also the intermediate interactions within the network, including: a complete representation of the co-authorship network; an overview of the collaborations' evolution over time; and the recommendations for each researcher to initiate or intensify cooperation. Some visualizations are interactive, allowing the user to filter data by time frame and highlight specific collaborations. The contributions of our work, compared to the state-of-the-art, can be summarized as follows: (i) VRRC can be applied to any co-authorship network, it provides both network and recommendation visualizations, it is a Web-based tool and it allows easy sharing of the created visualizations (existing tools do not offer all these features together); (ii) VRRC establishes graphical representations to ease the visualization of its results (traditional approaches present the recommendation results through simple lists or charts); and (iii) with VRRC, the user can identify not only new possible collaborations but also existing cooperation that can be intensified (current recommendation approaches only indicate new collaborations). This work was partially supported by CNPq, Brazil.
BRASNAM 2012
W. Viana, M.M. Moro. Busca de Caminhos entre Usuários de Redes Sociais em Tempo Real. I Brazilian Workshop on Social Network Analysis and Mining, 2012.
ABSTRACT: The average distance between nodes in a social network is small, according to the theory of six degrees of separation. However, online social networks do not offer ways to discover paths between their users. Traditional algorithms are applicable to offline copies of their graphs. On the Web, however, the ideal is to find paths using online data, which is a difficult task given the access limitations imposed by the social networks. In this work, we introduce an algorithm for finding paths in real time, called CUTE. It uses a heuristic that considers the geographic distance between users. In our experimental evaluation on Twitter, CUTE finds short paths between users while expanding fewer than 40 nodes.
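The sketch below shows a generic greedy best-first expansion guided by a distance heuristic over an on-demand neighbor-fetch interface, in the spirit of the search described above; it is not the CUTE algorithm itself, and fetch_neighbors()/geo_distance() are hypothetical stand-ins for API calls.

```python
# Sketch of heuristic path search: always expand the frontier user whose
# estimated geographic distance to the target is smallest, fetching neighbors
# on demand (as an online social network API forces us to). This is a generic
# greedy best-first search, not the CUTE algorithm; fetch_neighbors() and
# geo_distance() are hypothetical stand-ins for API calls.
import heapq

def find_path(source, target, fetch_neighbors, geo_distance, max_expansions=40):
    frontier = [(geo_distance(source, target), source, [source])]
    seen = {source}
    expansions = 0
    while frontier and expansions < max_expansions:
        _, user, path = heapq.heappop(frontier)
        expansions += 1
        for friend in fetch_neighbors(user):      # one API call per expansion
            if friend == target:
                return path + [friend]
            if friend not in seen:
                seen.add(friend)
                heapq.heappush(
                    frontier,
                    (geo_distance(friend, target), friend, path + [friend]))
    return None  # no path found within the expansion budget
```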
CIKM 2011
F.C. Hummel, A.S. da Silva, M.M. Moro, A.H.F. Laender. Multiple keyword-based queries over XML streams. In: ACM Conference on Information and Knowledge Management (CIKM), Glasgow, UK, 2011.
ABSTRACT: In this paper, we propose processing multiple keyword-based queries over XML streams in a multi-query fashion. Our algorithms rely on parsing stacks designed for simultaneously matching terms from several distinct queries and use new query indexes to speed up search operations when processing a large number of queries. Besides defining a new problem and novel solutions, we perform experiments in which aspects related to performance and scalability are examined.
ER-WISM 2011
J.P.M. de Oliveira, G.R. Lopes, M.M. Moro. Academic Social Networks. In: ER Workshops, pages 2-3, Brussels, Belgium, 2011.
ABSTRACT: The growth of Web 2.0 has encouraged the consideration not only of technological and content aspects but also of social interactions and their relational aspects. Research in the academic context has also followed this trend. Methods and applications have been proposed and adapted to consider “social aspects” in different ways. In this paper, we present an overview of our publications focusing on Academic Social Networks, including proposals of analysis, dissemination and recommendation for this context.
ICITA 2011
G.R. Lopes, M.M. Moro, R. da Silva, E.M. Barbosa, J.P.M. Oliveira. Ranking Strategy for Graduate Programs Evaluation. In: ICITA - 7th International Conference on Information Technology and Application, Sydney, Australia, 2011.
ABSTRACT: The demand for quality assessment criteria and associated evaluation methods in academia is increasing and has been the focus of many studies in the last decade. This growth arises from the pursuit of academic excellence and the need to support the decision making of funding agencies. The high pressure from such a scenario requires objectively defined quality criteria. In this paper, we develop an assessment procedure for graduate program evaluation based on the internal collaborations among their research groups. These collaborations are evaluated through analysis of co-authorship networks based on novel metrics of social interaction. Furthermore, our procedure is easily reproduced and may be customized for evaluating any set of research groups. Our experiments show that the ranking provided by our metrics agrees with the baseline (the official ranking defined by a national agency).
JCDL 2011
A. H. F. Laender, M. M. Moro, M. A. Gonçalves, C. A. Davis Jr., A. S. da Silva, A. J. C. Silva, C. A. S. Bigonha, D. H. Dalip, E. M. Barbosa, Eli Cortez, P. S. Procópio Jr., R. Odon de Alencar, T. N. C. Cardoso, T. Salles. Building a research social network from an individual perspective. In: Proceedings of the 2011 Joint International Conference on Digital Libraries, Canada.
ABSTRACT: In this poster paper, we present an overview of CiênciaBrasil, a research social network involving researchers within the Brazilian INCT program. We describe its architecture and the solutions adopted for data collection, extraction, and deduplication, and for materializing and visualizing the network.
SBBD 2011
P.S.Procopio Jr., A.H.F. Laender, M.M. Moro. Análise da Rede de Coautoria do Simpósio Brasileiro de Bancos de Dados. In: Sessão de Pôsteres, Simpósio Brasileiro de Banco de Dados, Florianopolis, Brazil, 2011.
ABSTRACT: This paper presents an analysis of the co-authorship network of the Brazilian Symposium on Databases (SBBD), which in 2010 completed 25 years of existence, consolidating itself as the largest and most important event in Latin America for the presentation and discussion of research results related to the database area. To this end, we collected bibliographic data from all of its 25 editions and present a series of statistics, such as the average number of papers per author, the average number of papers per edition, and the average number of co-authors per paper, among others. In addition, we built and analyzed the SBBD co-authorship network, examining both its structural characteristics and its temporal evolution. We also show that this network follows a phenomenon typical of several other social networks, known as the small-world effect.
SBBDdemo 2011
E.M. Barbosa, M.M. Moro, G.R. Lopes, J.P.M. Oliveira. VRRC: Uma Ferramenta Web para Visualização e Recomendação em Redes de Coautoria. In: VIII Sessão de Demos, Simpósio Brasileiro de Banco de Dados, Florianopolis, Brazil, 2011.
ABSTRACT: This paper proposes a Web-based tool that receives any publication list as input and generates several visualizations and recommendations on the co-authorship network formed from that list. The tool is a practical and fast solution to obtain in-depth analyses of a co-authorship network.
SEMISH 2011
A. H. F. Laender, M. M. Moro, M. A. Gonçalves, C. A. Davis Jr., A. S. da Silva, A. J. C. Silva, C. A. S. Bigonha, D. H. Dalip, E. M. Barbosa, Eli Cortez, P. S. Procópio Jr., R. Odon de Alencar, T. N. C. Cardoso, T. Salles. Building a research social network from an individual perspective. In: Seminário Integrado de Software e Hardware (SEMISH), Anais do XXXI Congresso da Sociedade Brasileira de Computação (SBC), 2011.
ABSTRACT: Research social networks are a potentially useful resource for studying science and technology indicators from specific communities (e.g., a country). However, building and analyzing such networks beget challenges beyond those from regular social networks, since data about people and their relationships are usually dispersed across various sources. In this paper, we present a research social network built from an individual perspective by gathering data from a Brazilian curricula vitae repository. We describe its architecture and the solutions adopted for data collection, extraction and deduplication, and for materializing and visualizing the network.
WWW/Internet 2011
G.R. Lopes, M.M. Moro, J.P.M. Oliveira. Temporal Influence in Collaborators Recommendation in Social Networks. In: IADIS International Conference WWW/Internet, Rio de Janeiro, Brazil, 2011.
ABSTRACT: In the last decade, defining recommendations considering Social Networks has been the focus of many studies. Such studies propose new techniques for optimizing different aspects such as the user profile generation and maintenance, the recommendation function and the user connections. Many of those consider mostly the connections established within the social networks, disregarding the rich information that can be extracted from them. In this work, we propose an overall function for recommending collaborations based on a co-authorship Social Network. These collaborations are evaluated and weighted through temporal analysis on co-author relationships. Experiments show that considering temporal aspects can lead to improvements in the ordering of recommendation results. Moreover, this can be used to reduce the number of relationships considered to generate the recommendations.
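One simple way to bring temporal aspects into such a recommendation score is an exponential recency decay over co-authored papers, sketched below; the decay form and half-life are hypothetical, not the paper's exact weighting.

```python
# Sketch of weighting co-author relationships by recency with an exponential
# decay, one simple way to add temporal aspects to a recommendation score.
# The decay form and half-life are hypothetical, not the paper's weighting.
def temporal_weight(coauthored_years, current_year=2011, half_life=3.0):
    """Sum of per-paper weights; recently co-authored papers count more."""
    return sum(0.5 ** ((current_year - y) / half_life) for y in coauthored_years)

# Two candidate collaborators with the same number of joint papers:
print(round(temporal_weight([2010, 2011]), 2))  # recent collaboration -> higher weight
print(round(temporal_weight([2004, 2005]), 2))  # old collaboration -> lower weight
```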
WebMedia 2010
C. A. S. Bigonha, T. N. C. Cardoso, M. M. Moro, V. A. F. Almeida, M. A. Gonçalves. Detecting Evangelists and Detractors on Twitter. In: Proceedings of the Brazilian Symposium on Multimedia and the Web (WebMedia), 2010, Belo Horizonte, Brazil.
ABSTRACT: Social networking websites provide a suitable environment for interaction and topic discussion. With the growing popularity of online communities, stimulated by the easiness with which content can be created and consumed, some of this content became strategic for companies interested in obtaining population feedback on products, personalities, etc. One of the most important of such websites is Twitter: recent statistics report 50 million new tweets each day. However, processing this amount of data is very costly and a big part of it is simply not useful for strategic analysis. In this paper, we propose a new technique for ranking the most influential users in Twitter based on a combination of the user position in the network topology, the polarity of her opinions and the textual quality of her tweets. In addition, we develop and compare two methods for calculating the network influence. We also performed experiments with a real dataset containing one month of posts regarding soda brands. Our experimental evaluation shows that our approach can successfully identify some of the most influential users and that interactions between users are the best evidence to determine user influence.
WISM 2010
G.R. Lopes, M.M. Moro, L. K. Wives, J.P.M. de Oliveira. Collaboration Recommendation on Academic Social Networks. In: Proceedings of ER Workshops - International Workshop on Web Information Systems Modeling (WISM), 2010, Vancouver, Canada.
ABSTRACT: In the academic context, scientific research works are often performed through collaboration and cooperation between researchers and research groups. Researchers work on various subjects and in several research areas. Identifying new partners to execute joint research and analyzing the level of cooperation of the current partners can be very complex tasks. Recommendation of new collaborations may be a valuable tool for reinforcing and discovering such partners. This paper presents an innovative approach to recommend collaborations in the context of academic social networks. Specifically, we introduce the architecture of such an approach and the metrics involved in recommending collaborations. We also present an initial case study to validate our approach.
AMW 2010a
G.R. Lopes, M.M. Moro, L. K. Wives, J.P.M. de Oliveira. Cooperative Authorship Social Network. In: IV Alberto Mendelzon Workshop on Foundations of Data Management (AMW), 2010, Buenos Aires, Argentina.
ABSTRACT: This paper introduces a set of challenges for developing a dissemination service over a Web collaborative network. We define specific metrics for working on a co-authorship research social network. As a case study, we build such a network using those metrics and compare it to a manually built one. Specifically, once we build a collaborative network and verify its quality, the overall effectiveness of the dissemination services will also be improved.
AMW 2010b
A.C. Hora, C.A. Davis Jr., M.M. Moro. Generating XML/GML Schemas from Geographic Conceptual Schemas. In: IV Alberto Mendelzon Workshop on Foundations of Data Management (AMW), 2010, Buenos Aires, Argentina.
ABSTRACT: A large volume of data with complex structures is currently represented in GML (Geography Markup Language) for storing and exchanging geographic information. As the size and complexity of such documents and their schemas grow, techniques and rules for designing and creating such documents become indispensable. This paper introduces a method for mapping geographic conceptual specifications (defined in OMT-G) to GML Schema. Our method avoids semantic or structural losses and provides redundancy-free data. It also reduces the use of integrity constraints and improves the nesting of XML elements in the resulting schema. We have implemented the method in order to automate the process of obtaining the target schema from the original geographic model. Experimental results show that spatial and non-spatial queries over the GML documents created from schemas generated using our method are more efficient than on documents created with a traditional, direct mapping process.
SEKE 2009
M. M. Moro, D.B. Saccol, R.M. Galante. TRIple Content-based OnTology (TRICOt) for XML Dissemination. In: International Conference on Software Engineering and Knowledge Engineering (SEKE), 2009, Boston, USA.
ABSTRACT: As the Internet and distributed systems evolve, content dissemination systems become a hot topic for researchers in those areas. In such systems, users define profiles (queries) that must be evaluated over incoming messages (documents), usually on streams. Given the high number of profiles and the considerable flow of incoming messages in such systems, research problems reach new levels of complexity from both database and software engineering perspectives. For example, those features make distributed query evaluation even more complex. In this context, we propose to expand the use of ontologies to this new context of stream processing. Our initial evaluation shows that such a solution is viable and opens new possibilities for using the whole potential of ontologies in a very diverse set of applications.
SEMISH 2009
M. M. Moro, R. M. Galante, D. Saccol, B. Loscio. Disseminação de Conteúdo XML Baseada em Ontologias. In: Seminário Integrado de Software e Hardware (SEMISH). Anais do XXIX Congresso da Sociedade Brasileira de Computação (SBC), 2009.
ABSTRACT: As the Internet and distributed systems evolve, a new paradigm aggregates the concept of content dissemination to XML query engines. The query is still evaluated over the stored data, but it is also registered into the system. Then, those queries will also be evaluated over the incoming data such that the documents that satisfy them are disseminated back to the users. In such a context, this paper proposes that ontologies be applied in order to improve the performance of content-based dissemination systems. Our initial experimental evaluation shows that such a solution is viable and exhibits a considerable advantage over the state-of-the-art techniques.
WTDBD 2009
F. C. Hummel, A. S. Silva, M. M. Moro. Especificação de Perfis Baseados em Palavras-chave em Disseminação de Documentos XML. In: Workshop on Thesis and Dissertations in Databases (WTDBD), Brazilian Symposium on Databases (SBBD), 2009.
ABSTRACT: The concept of keywords is widely used, especially by Web search engines. Given the popularity of these tools, users are already accustomed to their ease of use. Recently, the field of XML document dissemination has gained attention, focusing on building large-scale applications that use XML query languages to represent user profiles. This work aims to develop an approach for representing profiles through keywords in XML document dissemination. The approach relies on a system whose goal is to derive a structured XML query (XPath) from a set of keywords provided by the user.
DATAX 2008
Z. Vagena, M. M. Moro. Semantic Search over XML Document Streams. In: International Workshop on Database Technologies for Handling XML Information on the Web (DATAX), 2008, Nantes, France.
ABSTRACT: A large number of web data sources, such as blogs, news sites and podcast hosts, are currently disseminating their content in the form of streaming XML documents. The variability and heterogeneity of those sources make the employment of traditional querying schemes, which are based on structured query languages, cumbersome for the end user (those languages require precise knowledge of the underlying schema of each queried data source in order to be able to formulate meaningful queries). On the other hand, keyword search provides an alternative retrieval paradigm that is both simple and effective. Its importance for XML retrieval is well established and many specialized XML search engines have already appeared. Those engines support semantic search over XML documents within persistent environments, in which XML documents are permanently stored and can be indexed for efficient retrieval. Nevertheless, to the best of our knowledge, there is currently no published work that focuses on a streaming environment. In this paper, we attempt to fill this gap and study the problem of semantic keyword search over streaming XML documents. In particular, we build on previous work on semantic search over stored XML documents and propose a retrieval language that is simple and enables semantic search over XML documents. We then devise novel, online query processing algorithms that can answer semantic search queries over streaming XML data.
SEMISH 2008
C. M. D. S. Freitas, L. P. Nedel, R. Galante, L. C. Lamb, A. S. Spritzer, S. Fujii, J. P. M. de Oliveira, R. M. Araújo, M. M. Moro. Extração de Conhecimento e Análise Visual de Redes Sociais. In: Seminário Integrado de Software e Hardware (SEMISH). Anais do XXVIII Congresso da Sociedade Brasileira de Computação (SBC), 2008.
ABSTRACT: A social network is a graph where people or organizations (depending on the application) are represented as nodes connected by edges that can refer to either tight social bonds or some common, shared aspect. The graph structure analysis and the statistical analysis of specific node/edge attributes can reveal important individuals, relationships, and clusters. New information continues to be collected and stored, and the size and complexity of the resulting semantic graphs overwhelm human cognitive abilities. Hence, it is necessary to improve the computational mechanisms for analyzing such a volume of data. In this paper, we focus on analyzing the information from social networks, extracting relevant knowledge, and visualizing the facts resulting from the analysis.
WTDBD 2008
A. B. Perini, R. M. Galante, M. M. Moro. AXEES: Adapting XML Queries on Evolving Schemas. In: Workshop on Thesis and Dissertations in Databases (WTDBD), Brazilian Symposium on Databases (SBBD), 2008.
ABSTRACT: XML schemas and documents evolve over time to properly accommodate the data and their specifications. Several works propose techniques to control the evolution of XML data, but keeping queries working while the schema evolves still represents a major challenge. This work proposes a mechanism capable of automatically adapting queries over XML documents whose schemas evolve over time. The main contributions are the analysis of the impact of different XML schema evolution operations on queries and the specification of revalidation and adaptation processes to be applied to those queries. The mechanism also has the advantage of eliminating the need for manual adjustment of query definitions during the evolution of the associated XML schemas.
SBBD 2007
M.M. Moro, Z. Vagena. The Role of Structural Summaries for XML Retrieval, Tutorial. In: 22nd Brazilian Symposium on Databases (SBBD), October 2007, João Pessoa, Brazil.
ABSTRACT: A Structural Summary of an XML document is a dynamically generated and maintained graph structure that preserves the structural characteristics of the document in a compact form. The versatility of structural summaries has been established with their extensive usage for diverse retrieval tasks. Within traditional XML query processing those structures have been used as primary indexes on the structure, as well as for (a) structure discovery, (b) query formulation, rewrite and optimization, (c) storage of statistics, and other important metadata information. At the same time, structural summaries have appeared within other XML retrieval scenarios including (a) XML keyword search, (b) information discovery within P2P systems, and (c) message routing within publish/subscribe systems. This tutorial introduces the concept of XML Structural Summaries and describes their role within XML retrieval. It covers the usage of those summaries for Database-style query processing, as well as Information Retrieval-style search tasks in the context of both centralized and distributed environments. Finally, it concludes with a presentation of new retrieval scenarios that can potentially be favorably supported by those summaries.
VLDB 2007
M.M. Moro, P. Bakalov, V.J. Tsotras. Early Profile Pruning on XML-aware Publish/Subscribe Systems. In: 33rd International Conference on Very Large Data Bases (VLDB), September 2007, Vienna, Austria.
ABSTRACT: Publish-subscribe applications are an important class of content-based dissemination systems where the message transmission is defined by the message content, rather than its destination IP address. With the increasing use of XML as the standard format in many Internet-based applications, XML-aware pub-sub applications become necessary. In such systems, the messages (generated by publishers) are encoded as XML documents, and the profiles (defined by subscribers) as XML query statements. As the number of documents and query requests grows, the performance and scalability of the matching phase (i.e., matching of queries to incoming documents) become vital. Current solutions have limited or no flexibility to prune out queries in advance. In this paper, we overcome such a limitation by proposing a novel early pruning approach called Bounding-based XML Filtering, or BoXFilter. The BoXFilter is based on a new tree-like indexing structure that organizes the queries based on their similarity and provides the lower and upper bound estimations needed to prune queries not related to the incoming documents. Our experimental evaluation shows that the early profile pruning approach offers drastic performance improvements over the current state-of-the-art in XML filtering.
SIGMOD 2007
M.M. Moro, L. Lim, Y-C Chang. Schema Advisor for Hybrid Relational-XML DBMS. In: 26th ACM SIGMOD International Conference on Management of Data (SIGMOD), June 2007, Beijing, China.
ABSTRACT: In response to the widespread use of the XML format for document representation and message exchange, major database vendors support XML in terms of persistence, querying and indexing. Specifically, the recently released IBM DB2 9 (for Linux, Unix and Windows) is a hybrid data server with optimized management of both XML and relational data. With the new option of storing and querying XML in a relational DBMS, data architects face the decision of what portion of their data to persist as XML and what portion as relational data. This problem has not been addressed yet and represents a serious need in the industry. Hence, this paper describes ReXSA, a schema advisor tool that is being prototyped for IBM DB2 9. ReXSA proposes candidate database schemas given an information model of the enterprise data. It has the advantage of considering qualitative properties of the information model, such as reuse, evolution and performance profiles, for deciding how to persist the data. Finally, we show the viability and practicality of ReXSA by applying it to custom and real use cases.
ICDE 2007
Z. Vagena, M.M. Moro, V.J. Tsotras. RoxSum: Leveraging Data Aggregation and Batch Processing for XML Routing. In: 23rd International Conference on Data Engineering (ICDE), April 2007, Istanbul, Turkey.
ABSTRACT: Content-based routing is the primary form of communication within publish/subscribe systems. In those systems data transmission is performed by sophisticated overlay networks of content-based routers, which match data messages against registered subscriptions and forward them based on this matching. Despite their inherent complexities, such systems are expected to deliver information in a timely and scalable fashion. As a result, their successful deployment is a strenuous task. Relevant efforts have so far focused on the construction of the overlay network and the filtering of messages at each broker. However, the efficient transmission of messages has received less attention. In this work, we propose a solution that gracefully handles the transmission task, while providing performance benefits for the matching task as well. Along those lines, we design RoXSum, a message representation scheme that aggregates the routing information from multiple documents in a way that permits subscription matching directly on the aggregated content. Our performance study shows that RoXSum is a viable and effective technique, as it speeds up message routing by more than an order of magnitude.
WEBDB 2007
Z. Vagena, M.M. Moro, V.J. Tsotras. Value-Aware RoXSum: Effective Message Aggregation for XML-Aware Information Dissemination. In: 10th International Workshop on the Web and Databases (WebDB), June 2007, Beijing, China.
ABSTRACT: Publish/subscribe (or pub/sub) systems perform asynchronous message transmission, from publishers to subscribers, without any of the parties having knowledge of the other. The pub/sub infrastructure manages the delivery of the messages, which is guided by user subscriptions that specify the type of information the subscribers are interested in. Since XML prevails as the standard for information exchange, efficient XML-aware pub/sub systems become necessary. Within that context, we propose VA-RoXSum, a novel message representation scheme that aggregates the content of messages in a space-efficient manner. Coupled with specialized processing algorithms that operate on its aggregated content, the VA-RoXSum enables the batch processing of groups of messages and considerably improves the performance of the subscription-guided filtering task. Our preliminary experiments show that a pub/sub infrastructure with VA-RoXSum achieves up to two orders of magnitude faster matching, compared with state-of-the-art alternatives, which operate on the original messages.
WWW 2007
M.M. Moro, S. Malaika, L. Lim. Preserving XML Queries during Schema Evolution. In: 16th International World Wide Web Conference (WWW), May 2007, Banff, Canada.
ABSTRACT: In XML databases, new schema versions may be released as frequently as once every two weeks. This poster describes a taxonomy of changes for XML schema evolution. It examines the impact of those changes on schema validation and query evaluation. Based on that study, it proposes guidelines for XML schema evolution and for writing queries in such a way that they continue to operate as expected across evolving schemas.
SBBD 2006
R. Machado, A.F. Moreira, R.M. Galante, M.M. Moro. A Query Language for a Versioned Object Oriented Database. In: 21st Brazilian Symposium on Databases (SBBD), October 2006, Florianopolis, Brazil.
ABSTRACT: Many applications require that all data updates be stored in and retrieved from a database. Such a requirement is supported in object oriented databases through versioning. While most related work focuses on different aspects of version concepts, design modeling and efficient processing of versions, there is yet to be a precise definition of a query language for database systems with version control. Therefore, we define a query language (called VOQL, Versioned Object Query Language) for an object oriented database with versioning support. VOQL extends the ODMG Object Query Language (OQL) for managing the evolution of different elements of the data. Besides the language's main features, we provide the basis of a formal definition for VOQL. Finally, we validate the proposed definition by implementing an interpreter for the language.
WWW 2006
M.M. Moro, Z. Vagena, V.J. Tsotras. Evaluating Structural Summaries as Access Methods for XML. In: 15th International World Wide Web Conference (WWW), May 2006, Edinburgh, Scotland.
ABSTRACT: Structural summaries are data structures that preserve all structural features of XML documents in a compact form. We investigate the applicability of the most popular summaries as access methods within XML query processing. In this context, issues like space and false positives introduced by the summaries need to be examined. Our evaluation reveals that the additional space required by the more precise structures is usually small and justified by the considerable performance gains that they achieve.
VLDB 2005
M.M. Moro, Z. Vagena, V.J. Tsotras. Tree-Pattern Queries on a Light-weight XML Processor. In: International Conference on Very Large Databases (VLDB), August 2005, Trondheim, Norway.
ABSTRACT: Popular XML languages, like XPath, use “tree-pattern” queries to select nodes based on their structural characteristics. While many processing methods have already been proposed for such queries, none of them has found its way to any of the existing “lightweight” XML engines (i.e. engines without optimization modules). The main reason is the lack of a systematic comparison of query methods under a common storage model. In this work, we aim to fill this gap and answer two important questions: what the relative similarities and important differences among the tree-pattern query methods are, and if there is a prominent method among them in terms of effectiveness and robustness that an XML processor should support. For the first question, we propose a novel classification of the methods according to their matching process. We then describe a common storage model and demonstrate that the access pattern of each class conforms or can be adapted to conform to this model. Finally, we perform an experimental evaluation to compare their relative performance. Based on the evaluation results, we conclude that the family of holistic processing methods, which provides performance guarantees, is the most robust alternative for such an environment.
IDEAS 2004
Z. Vagena, M.M. Moro, V.J. Tsotras. Efficient Processing of XML Containment Queries using Partition-Based Schemes. In: 8th International Database Engineering & Applications Symposium (IDEAS), July 2004, Coimbra, Portugal.
ABSTRACT: XML query languages provide facilities to query XML data both on their value as well as their structure. A basic operation in processing and optimizing such queries is the containment join, which takes two sets of elements and returns pairs of elements where one is the ancestor (or descendant) of the other. Most of the techniques proposed so far assume that the two sets are already sorted or utilize preexisting indexing schemes. In contrast, a partition-based technique does not require indexing or sorting. Instead, the containment join is processed by dividing the input sets into smaller partitions. In this paper, we present a new partition-based scheme that gracefully adapts to different document sizes. The advantages of our approach are validated through an experimental comparison with previous work. Moreover, the experiments demonstrate that our solution provides a viable alternative to non-partition join algorithms when the input data is neither sorted nor indexed.
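For reference, the sketch below shows the containment-join operation itself over region-encoded elements (a stack-based join over inputs sorted by start position); the paper's partition-based scheme, which avoids sorting and indexing, is not reproduced here.

```python
# Sketch of the containment-join operation: with region encoding, element a is
# an ancestor of d iff a.start < d.start and d.end < a.end. This is a basic
# stack-based join over inputs already sorted by start position; the paper's
# partition-based scheme (no sorting or indexing) is not reproduced here.
def containment_join(ancestors, descendants):
    """ancestors, descendants: lists of (start, end) region-encoded elements,
    each sorted by start position. Returns all (ancestor, descendant) pairs."""
    results, stack = [], []
    events = sorted(ancestors + descendants, key=lambda e: e[0])
    anc_set = set(ancestors)
    for elem in events:
        # Discard ancestors whose region ended before this element starts.
        while stack and stack[-1][1] < elem[0]:
            stack.pop()
        if elem in anc_set:
            stack.append(elem)
        else:
            results.extend((a, elem) for a in stack)  # every stacked ancestor contains elem
    return results

A = [(1, 10), (12, 20)]          # e.g., <section> elements
D = [(2, 3), (5, 6), (13, 14)]   # e.g., <figure> elements
print(containment_join(A, D))    # [((1,10),(2,3)), ((1,10),(5,6)), ((12,20),(13,14))]
```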
WEBDB 2004
Z. Vagena, M.M. Moro, V.J. Tsotras. Twig Query Processing over Graph-Structured XML Data. In: 7th International Workshop on the Web and Databases (WebDB), held with ACM International Conference on Management of Data (SIGMOD), June 2004, Paris, France.
ABSTRACT: XML and semi-structured data are usually modeled using graph structures. Structural summaries, which have been proposed to speed up XML query processing, have graph forms as well. The existing approaches for evaluating queries over tree-structured data (i.e., data whose underlying structure is a tree) are not directly applicable when the data is modeled as an arbitrary graph. Moreover, they cannot be applied when structural summaries are employed and, to the best of our knowledge, no analogous techniques have been reported for this case either. As a result, the potential of structural summaries is not fully exploited. In this paper, we investigate query evaluation techniques applicable to graph-structured data. We propose efficient algorithms for the case of directed acyclic graphs, which appear in many real-world situations. We then tailor our approaches to handle other directed graphs as well. Our experimental evaluation reveals the advantages of our solutions over existing methods for graph-structured data.
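The difficulty is easy to see with a small sketch: on a graph, a node may have several parents, so the single (start, end) interval used for trees no longer determines ancestorship, and the structural predicate becomes plain reachability. The Python fragment below is a deliberately naive DFS check that only illustrates this difference; it is not one of the algorithms proposed in the paper.

    def is_ancestor(graph, a, d):
        """graph maps a node to its children; True if d is reachable from a."""
        stack, seen = list(graph.get(a, [])), set()
        while stack:
            node = stack.pop()
            if node == d:
                return True
            if node not in seen:
                seen.add(node)
                stack.extend(graph.get(node, []))
        return False

    # 'section' is shared by two parents, which a tree encoding cannot express.
    dag = {"doc": ["chapter", "appendix"],
           "chapter": ["section"], "appendix": ["section"], "section": []}
    print(is_ancestor(dag, "appendix", "section"))   # True
    print(is_ancestor(dag, "section", "doc"))        # False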
RIDE 2004
Z. Vagena, M.M. Moro, V.J. Tsotras. Supporting Branched Versions on XML documents. In: 14th International Workshop on Research Issues on Data Engineering (RIDE-WS-ECEG), held with 20th International Conference on Data Engineering (ICDE), March 2004, Boston, USA.
ABSTRACT: Many e-commerce and e-government applications are collaborative in nature (e.g., negotiation and e-catalog management). In collaborative environments, users typically define new document versions from any past version, which creates the need for supporting multiversion XML documents, particularly branched versioning. In this paper, we address the problem of evaluating path expression queries over XML documents with branched versions. We extend path joins to work in a branched version environment and to allow queries on multiple versions. We propose a storage scheme that efficiently maintains all branched document versions and describe the changes required in PathStack, an optimal pattern matching algorithm. Finally, we investigate the effectiveness of our techniques through experimental evaluation.
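As a minimal illustration of branched versioning (the data layout and names below are assumptions made for the example, not the storage scheme of the paper), each version only needs to record the version it was derived from; the version space then forms a tree, and the lineage of a version is the branch a version-aware query would consult.

    # Every version records its parent version, so versions form a tree.
    parent = {"v1": None, "v2": "v1", "v3": "v1", "v4": "v3"}   # v2 and v3 branch off v1

    def lineage(version):
        """Versions from the root down to `version` along its branch."""
        chain = []
        while version is not None:
            chain.append(version)
            version = parent[version]
        return list(reversed(chain))

    print(lineage("v4"))   # ['v1', 'v3', 'v4']
    print(lineage("v2"))   # ['v1', 'v2']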
CLEI 2002
M.M. Moro, N. Edelweiss, C.S. dos Santos. Temporal Versions Model. In: IX Concurso de Tesis de Maestría Clei-UNESCO, XXVIII Conferencia Latinoamericana de Informatica, November 2002, Montevideo, Uruguay, p.116. First Prize.
ABSTRACT: This work presents an alternative for combining temporal data and a version model. The result, the Temporal Versions Model, is able to store object versions and, for each version, the history of its dynamic property values. TVM is ideal for modeling time-evolving systems that need to manage design alternatives as versions. One of the main features of our model is the possibility of having two different time orders, branched time for the object and linear time for each version. The model supports integration with existing databases by allowing normal classes among the temporal versioned classes. Finally, an approach to its implementation on top of a commercial database within an integrated environment is presented.
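The two time orders can be pictured with a small Python sketch: versions of one object form a branched derivation tree, while each version keeps a linear history of its dynamic attribute values. Class and field names here are illustrative assumptions rather than the model's actual definitions.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Version:
        name: str
        derived_from: Optional["Version"] = None      # branched time: tree of versions
        history: dict = field(default_factory=dict)   # attribute -> [(valid_from, valid_to, value)]

        def set(self, attr, valid_from, valid_to, value):   # linear time inside one version
            self.history.setdefault(attr, []).append((valid_from, valid_to, value))

        def value_at(self, attr, instant):
            for start, end, value in self.history.get(attr, []):
                if start <= instant <= end:
                    return value
            return None

    v1 = Version("v1")
    v2 = Version("v2", derived_from=v1)                # a new design alternative derived from v1
    v2.set("status", "2001-01", "2001-06", "draft")
    v2.set("status", "2001-07", "2001-12", "approved")
    print(v2.value_at("status", "2001-08"))            # approved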
DEXA 2002
M.M. Moro, A.P. Zaupa, N. Edelweiss, C.S. dos Santos. TVQL - Temporal Versioned Query Language. In: DEXA 2002 - 13th International Conference on Database and Expert Systems Applications, September 2002, Aix-en-Provence, France, LNCS 2453, pp. 618-627.
ABSTRACT: The Temporal Versions Model (TVM) is an object-oriented data model developed to store object versions and, for each version, the history of its dynamic attribute and relationship values. In this work, we propose a query language for this model. The language, called Temporal Versioned Query Language - TVQL, is based on SQL, adding new features to recover temporal information and versions. An approach to its implementation on top of a commercial database is also presented.
CTD 2002
M.M. Moro, N. Edelweiss, C.S. dos Santos. Temporal Versions Model. In: XV Workshop de Teses e Dissertações - CTD 2002, XXII Congresso da Sociedade Brasileira de Computação - SBC, July 2002, Florianópolis, Brazil, v.3, p.33-37. Honorable Mention.
ABSTRACT: This work presents an alternative for combining temporal data and a version model. The result, the Temporal Versions Model, is able to store object versions and, for each version, the history of its dynamic property and relationship values. TVM is ideal for modeling time-evolving systems that need to manage design alternatives as versions. An interface for modeling TVM classes is also presented.
DEXA 2001
M.M. Moro, S.M. Saggiorato, N. Edelweiss, C.S. dos Santos. Adding Time to an Object-Oriented Versions Model. In: DEXA 2001 - 12th International Conference on Database and Expert Systems Applications, September 2001, Munich, Germany, LNCS 2113, pp. 805-814.
ABSTRACT: In this paper, we propose an object-oriented version model which presents temporal concepts to store not only the object lifetime but also the history of dynamic attributes and relationships defined in the versioned objects and versions. One of the main features of our model is the possibility of having two different time orders, branched time for the object and linear time for each version. The model supports integration with existing databases, by allowing the modeling of normal classes among the temporal versioned classes. Finally, an approach to its implementation on top of a commercial database is presented.
SEKE 2001
M.M. Moro, S.M. Saggiorato, N. Edelweiss, C.S. dos Santos. A Temporal Versions Model for Time-Evolving Systems Specification. In: Proceedings of SEKE - 13th International Conference on Software Engineering & Knowledge Engineering, June 2001, Buenos Aires, Argentina, pp. 252-259.
ABSTRACT: In this paper, we propose a temporal extension to an object-oriented versions model. The union of these concepts allows keeping track of data evolution, retrieving time information, and managing all data states. The resulting model presents temporal concepts to store not only the object lifetime but also the history of dynamic attributes and relationships defined in the versioned objects and versions. One interesting feature of our model is the possibility of having two different time orders, branched time for the object and linear time for each version. The model also supports integration with existing databases by allowing the modeling of normal classes among the temporal versioned classes.
IDEAS 2001
C.W.K. Langsh, M.M. Moro, S.C. Bertagnolli, M. Pimenta. Requisitos de Interfaces para Sistemas Críticos. In: Memorias IDEAS - 4th Iberoamerican Workshop on Requirements Engineering and Software Environments, April 2001, Santo Domingo, Costa Rica, pp. 360-369.
ABSTRACT: Usually, requirements for safety-critical systems address functional aspects and quality factors, basically related to correctness, robustness, and performance. On the other hand, interaction requirements specify the operator tasks and concern not only the system functional requirements but also the user behavior. In this paper, the need for determining the interaction requirements of critical systems is discussed by approaching fundamental concepts of Requirements Engineering and CHI, with emphasis on some dialog properties.
SCCC 2000
N. Edelweiss, P. Hübler, M.M. Moro, G. Demartini. A Temporal Database Management System Implemented on top of a Conventional Database. In: Proceedings of the XX International Conference of the SCCC - SCCC'2000, November 2000, Santiago, Chile, IEEE Press, pp. 58-67.
ABSTRACT: Temporal data models have proven to be convenient to specify applications, allowing the representation of the temporal evolution of data. Several temporal data models have been proposed in the last 20 years with this purpose, but no commercial implementation of a temporal database is available yet. This paper presents an Integrated Temporal Database Environment implemented on top of a conventional database. Using this environment, a user can handle the specification, the data definition, and the queries as though the database implemented the temporal data model. The environment performs the mapping from the temporal conceptual schema to the corresponding database, and of the queries expressed in the temporal query language to SQL. Data definition is controlled based on the state transition rules of the temporal data model, thus keeping the temporal integrity of the database. The underlying conventional database remains transparent to the user.
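The general flavor of such a mapping can be sketched as follows, assuming (purely for illustration) that the conventional database stores each fact with explicit valid-time columns and that a conceptual temporal query such as "salary of Ana valid on 2000-08-15" is rewritten into ordinary SQL over those columns; the table and column names below are not the environment's actual mapping rules.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE employee_vt (
                        name TEXT, salary INTEGER,
                        valid_from TEXT, valid_to TEXT)""")   # '9999-12-31' means still valid
    conn.executemany(
        "INSERT INTO employee_vt VALUES (?, ?, ?, ?)",
        [("Ana", 1000, "2000-01-01", "2000-06-30"),
         ("Ana", 1200, "2000-07-01", "9999-12-31")])

    # The temporal query rewritten into plain SQL on the valid-time columns.
    row = conn.execute(
        """SELECT salary FROM employee_vt
           WHERE name = ? AND valid_from <= ? AND ? <= valid_to""",
        ("Ana", "2000-08-15", "2000-08-15")).fetchone()
    print(row[0])   # 1200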
CLEI 2000
M.M. Moro, N. Edelweiss. Interface de Consultas TF-ORM para Bancos de Dados Relacionais. In: XXVI Conferencia Latinoamericana de Informatica - CLEI'2000, September 2000, Mexico City, Mexico.
ABSTRACT: TF-ORM (Temporal Functionality in Objects with Roles Model) is an object-oriented temporal data model which uses roles to represent the object behaviors. Because there is no DBMS for TF-ORM, it was necessary to map the model to a relational database. Once this mapping is implemented, the next step is to allow TF-ORM queries to run on the database. To do so, the query must be mapped to SQL (Structured Query Language). The objective of this paper is to present an interface for executing TF-ORM queries on a relational database. The TF-ORM query can be read from a text file or written by the user through the interface. The result is the corresponding SQL query, which is recorded and presented to the user. After the mapping, the interface allows the execution of the query on the database.
CTIC 2000
M.M. Moro, N. Edelweiss. Interface de Consulta TF-ORM para Oracle. In: XIX Concurso de Trabalhos de Iniciação - CTIC'2000, Anais do XX Congresso da Sociedade Brasileira de Computação, July 2000, Curitiba, Brazil, p. 50.
ABSTRACT: TF-ORM (Temporal Functionality in Objects with Roles Model) is an object-oriented temporal data model which uses role concepts to represent the different object behaviors. Because there is no DBMS for TF-ORM, we have chosen to map the model to a relational database. Once this mapping is implemented, the next step is to allow the user to perform TF-ORM queries on the database. Like the model, the query also needs to be mapped to a language that can run on a relational database, in this case SQL (Structured Query Language). The objective of this work is to present an interface for mapping TF-ORM queries to an Oracle database. The TF-ORM query can be read from a text file or written by the user through the interface. The result is the corresponding SQL query, which is recorded to another file and presented to the user. After the mapping is completed, the interface allows the user to execute the query on the chosen database.