Many research initiatives and projects at the crossroads of social innovation and Web intelligence have so far acted in a dispersed manner; this workshop gives them the opportunity to share knowledge and experiences. Many social innovation issues demand novel solutions and new perspectives for resolving environmental, human rights, social conflict, and many other problems.
The SIWEB Workshop is driven by social innovation projects, such as the EU projects under the umbrella of Collective Awareness Platforms, which revalue entities and materials and use open platform environments while pushing Web intelligence and cutting-edge technologies (such as blockchains, crowdsourcing, and gamification). Such novel approaches can strengthen social innovation momentum with new forms of living, socializing, business uptake, and marketing, among other disruptive activities. The workshop targets work from academia, communities, and business, with an emphasis on how intelligent Web solutions can offer new and unconventional answers for social good, innovation, and societal impact.
The SIWEB Workshop aims to bring together research initiatives, stakeholders, academia, business vendors, innovators, and projects at the leading edge of social innovation and Web intelligence. The presented papers should focus on initiatives that strive to stimulate, set up, and sustain innovation systems that support multiple actors, such as citizens, communities, inventors, innovators, entrepreneurs, and public institutions, in co-creating and strengthening societal and circular-economy actions in line with digital social innovation principles.
Institute of Computer Science at University of Tartu, Estonia
Prof. Jaak Vilo heads the Data Science chair and the Institute of Computer Science at the University of Tartu, Estonia, and leads the health data analytics of STACC, a public-private research organisation in Estonia. He earned his PhD in Computer Science at the University of Helsinki, Finland. From 1999 to 2002 he worked at the European Bioinformatics Institute, UK, as one of the pioneers of early gene expression microarray data analytics. There he developed the Expression Profiler toolset for various biological data analysis tasks. In 2002, after 12 years abroad, he moved back to Estonia to help create the Estonian Biobank in a public-private partnership with VC investments, serving as director of informatics of EGeen Ltd. He also started his own research group, BIIT, at the University of Tartu, now about 20 people strong. His group applies data analysis, machine learning, and algorithmic techniques to a broad range of biological and health data and applications. Linking genomics and many other omics data with health records is a key to developing methods for the personalisation of medicine. Medical data, lab measurements, pharmacogenetics, and overall multi-genic disease risk scores are complicated to handle due to organisational and national barriers, yet international research would benefit greatly from opening up and sharing such data and research results. Prof. Vilo heads the ELIXIR-Estonia node of the pan-European biological data infrastructure, whose mission is to facilitate global data re-use.
Cyber Security is an area of growth in which employment opportunities abound. Many universities in Australia and overseas have started offering niche Cyber Security programs. On the same note, the number of research students in Cyber Security is growing, indicating demand for this emerging domain. Recent trends indicate that machine learning is a key aspect of Cyber Security, due to the volume of information crossing global networks and the individual data associated with that information. In this workshop, we will discuss modern machine learning algorithm development, implementation, and utilization in business scenarios specific to Cyber Security.
This workshop addresses an interdisciplinary research field involving Web Intelligence, Security Informatics, Big Data Analytics, Deep Learning/Machine Learning, and Cybersecurity. It aims to investigate the deliberate misuse of technical infrastructure for subversive purposes, including (but not limited to) the spreading of extremist propaganda and antagonistic or hateful commentary, the distribution of malware, online fraud and identity theft, and denial-of-service attacks. A better understanding of such phenomena on the Web (including social media) allows for their early detection and underpins the development of effective models for predicting cybersecurity threats.
The proposed workshop aims to set up an event focused on Web intelligence for smart cities by bringing together researchers and practitioners in the fields of smart cities and artificial intelligence (AI), especially Web intelligence. Specifically, the focus is on the emerging role of intelligence in transforming smart services into adaptive, self-evolving ones that address the usual smart city challenges, such as local growth, improvement of quality of life, efficiency, and climate change.
Ontology engineering is a subfield of artificial intelligence and computer science that aims at a structured representation of the terms, and the relationships between them, within a particular domain, with the purpose of facilitating knowledge sharing and reuse. An ontology project involves the development of ontology-building programs, ontology life-cycle management, research on ontology-building methods, support tools, and ontology languages, and a series of similar activities. Ontologies have found important applications in information sharing, system integration, knowledge-based software development, and many other areas of the software industry.
However, ontology engineering is a time-consuming and painstaking endeavor, and NLP technology has important contributions to make toward the quick and automatic development of ontologies. This workshop will focus on the recent advances made in ontology engineering and NLP, with the aim of promoting interaction between, and the common growth of, the two areas. We are particularly interested in the building of upper-level linguistic ontologies in NLP and the application of NLP technology to ontology engineering.
More importantly, we hope that individuals and research institutions in both ontology engineering and NLP will pay attention to this workshop, which may contribute to the integration and joint growth of these two areas.
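One classic bridge between the two areas is ontology learning from text. As a minimal, self-contained sketch (not drawn from any specific workshop submission; the pattern and helper function are our own illustration), the snippet below uses a Hearst-style lexico-syntactic pattern ("X such as Y, Z") to bootstrap candidate is-a relations for an ontology:

```python
import re

# Hearst-style pattern: a one- or two-word hypernym, "such as",
# then a comma/"and"-separated list of hyponyms.
PATTERN = re.compile(r"(\w+(?: \w+)?) such as (\w+(?:(?:, | and )\w+)*)")

def extract_isa(text):
    """Return (hyponym, hypernym) pairs found via the 'X such as Y' pattern."""
    pairs = []
    for match in PATTERN.finditer(text):
        hypernym = match.group(1)
        hyponyms = re.split(r", | and ", match.group(2))
        pairs.extend((h, hypernym) for h in hyponyms if h)
    return pairs

sentence = "Musical instruments such as violins, cellos and flutes are studied."
print(extract_isa(sentence))
```

In a real ontology-learning pipeline such pattern matching would be combined with parsing, term normalization, and statistical filtering, but even this toy version shows how NLP can seed the is-a backbone of an ontology automatically.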
Social media allows users to connect, collaborate, and debate on any topic. The result is a huge volume of user-generated content, including healthcare information that, if properly mined and analyzed, could help the public and private healthcare sectors improve the quality of their products and services while reducing costs.
In the public health area especially, practitioners can benefit greatly, since huge amounts of data can be gathered faster and at a lower cost than through traditional sources, mainly surveys. In fact, the pervasiveness and crowdsourcing power of social media data allow the modeling of phenomena that could not be addressed before because doing so was either too expensive or outright impossible, such as mapping the distribution of health information in a population, tracking health information trends over time, and identifying gaps between health information supply and demand. Although most individual social media posts and messages contain little informational value, the aggregation of millions of such messages can generate important knowledge.
Recently, social network data have been explored to monitor and analyze health issues, with applications in disease surveillance and epidemiological studies. The first and by far most common healthcare application of social media is influenza surveillance. Seminal works have shown that tweets can be used to track and predict influenza and to detect depression. To this end, a variety of techniques have been proposed: starting from approaches that capture the overall trend of a particular disease outbreak by monitoring social media, many others have appeared, based for example on linear regression, supervised machine learning, and social network analysis. Beyond influenza surveillance, other topics have started to be addressed, including pharmacovigilance, user behavioral patterns, drug abuse, depression, well-being, assisted living, and tracking the spread of infectious/viral diseases.
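The linear-regression approaches mentioned above typically relate weekly counts of flu-related posts to officially reported influenza-like-illness (ILI) rates. The following is a hedged sketch of that idea on entirely synthetic numbers (the data, variable names, and helper are illustrative assumptions, not results from the literature):

```python
import numpy as np

# Synthetic weekly counts of flu-related tweets and the corresponding
# reported ILI rates (percent of doctor visits); real studies would use
# collected social media data and official surveillance figures.
tweet_counts = np.array([120, 340, 560, 810, 950, 700, 430, 210], dtype=float)
ili_rates = np.array([1.1, 2.0, 3.1, 4.4, 5.0, 3.9, 2.5, 1.4])

# Ordinary least squares fit: ili_rate ~ a * tweet_count + b
a, b = np.polyfit(tweet_counts, ili_rates, deg=1)

def predict_ili(count):
    """Estimate the ILI rate for a week with `count` flu-related tweets."""
    return a * count + b

print(round(predict_ili(600.0), 2))
```

The appeal of such models is that the social media signal is available in near real time, potentially ahead of official surveillance reports; the main caveats, widely discussed in the literature, are media-driven spikes in chatter and demographic bias in who posts.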
The ever-increasing demand for online content moderation and user profiling is turning Web Intelligence concepts that were developed in good faith into a censorship and surveillance apparatus owned by corporations and national agencies. Consequently, users are bound to an Orwellian Internet where mainstream platforms, such as search engines, social media, and content providers, place the blame for filter bubbles and extensive user behavioral analysis on Artificial Intelligence, since it is particularly difficult to detect bias, deliberate human intervention, or inferences made, without disclosure, for purposes other than platform personalization. TICS aims to explore the technological, socio-economic, and legal means and driving forces behind these issues, and to propose alternative directions for building a semantic and human-centric Web that promotes netizen freedom.
Integrated Social Customer Relationship Management (Social CRM) is an emerging concept that includes strategies, processes, and technologies that use social media in CRM. Approaches from the field of Web intelligence are important for transforming the large mass of data available on social media into value-adding opportunities for companies. Today, a variety of software applications based on Web and text-mining techniques are used for this task. However, these tools often fall short of identifying complex patterns (e.g. semantic information, intentions). Advanced techniques, such as semantic business intelligence (SBI) or computational intelligence (CI), promise great potential for improving knowledge discovery capabilities and may also enable new usage scenarios in Social CRM (e.g. network analysis, topic recognition, trend prediction) in various domains (e.g. tourism, banking, energy, public sector, publishing, health, logistics, education). However, their current application in commercial tools and real-world scenarios seems limited, not only because of missing expertise, but also because of aspects such as ease of use, configuration costs, or the availability of required resources. The workshop aims to shed light on current research efforts, from both a technical and an economic perspective, targeting the development and implementation of innovative tools and methods for intelligent data analysis in Social CRM, resulting in new (integrated) processes and capabilities.
Contemporary computational sciences have an important impact on wide areas of the social sciences. Simulation technologies, and the capacity to compute the complex systems that social scientists want to study, are expanding exponentially, so ever more complex and more realistic systems can become targets. So-called Big Data analysis allows us to quantify human behavior and social phenomena at a fine-grained level that is nevertheless global in scale, thereby complementing experimental data and theoretical and computational simulation results.
New real-world applications of data mining and machine learning have shown that popular methods may be too simple and restrictive. Mining more complex, larger, and generally speaking “more difficult” datasets poses new challenges for researchers and calls for novel, more sophisticated approaches. We organize this workshop to promote research and discussion on more complex and advanced methods for particularly demanding data and Web mining problems. Although we welcome submissions concerning methods based on different principles, we would especially like to see new research on the use of optimization techniques. The new data and Web mining problems are definitely more complex than traditional ones and can result in more difficult, non-convex optimization formulations. We would like to focus the data mining community’s interest on the various challenging issues that come up when complex methods are used to tackle difficult data mining problems.
Director of the Institute for the Future, University of Nicosia, Cyprus
What’s next for blockchain research? From M2M commerce to self-sovereign identities for machines
The first wave of blockchain research, innovation, and implementation has been under way for almost ten years now. Distributed ledgers have created new paradigms for disintermediated value exchange and, in the process, have given rise to (sometimes irrationally inflated) expectations about their potential impact on the economy and society.
Today, as we move toward a more in-depth appreciation of blockchain capabilities and limits, new research challenges arise that will demand the attention of the research community, as well as of industrial practice, in coming years. In this talk, I will go through three such challenges and discuss ways in which they might influence our future research and technology development agendas.
Professor George M. Giaglis is Director of the Institute for the Future at the University of Nicosia, as well as a leading expert on blockchain technology and applications and advisor to many blockchain projects and technology start-ups. Prior to joining UNIC, he was Professor at the Athens University of Economics and Business (2002-2017), where he also served as Vice Rector (2011-2015). George has been working on digital currencies and blockchain since 2012, with his main focus being on new forms of industrial organization (programmable smart contracts, decentralized applications and distributed autonomous organizations) and new forms of corporate financing (token economy, crypto-economics and ICOs). He has been one of the first academics to research and teach on blockchain, having: designed the curriculum of the world’s first full academic degree on blockchain (MSc in Digital Currency at the University of Nicosia); led the development of blockchain credentialing technology that has resulted in the first ever publishing of academic certificates on the blockchain; taught on the disruptive innovation potential of blockchain, both at academic programs and in executive seminars worldwide; organized a number of prominent blockchain conferences and events, including Decentralized. Throughout his career, he has published more than 10 books and 150 articles in leading scientific journals and conferences, while he is frequently interviewed by media and invited as keynote speaker or trainer in events across the globe. He is the Chief Editor for Blockchain Technology at the Frontiers in Blockchain Journal and member of the Editorial Board at Ledger.
Professor of Artificial Intelligence and Information Retrieval, University of Amsterdam, The Netherlands
Maarten de Rijke is University Professor of Artificial Intelligence and Information Retrieval at the University of Amsterdam. He holds MSc degrees in Philosophy and Mathematics (both cum laude), and a PhD in Theoretical Computer Science. He worked as a postdoc at CWI, before becoming a Warwick Research Fellow at the University of Warwick, UK. He joined the University of Amsterdam in 1998, and was appointed full professor in 2004. He is a member of the Royal Netherlands Academy of Arts and Sciences (KNAW) and a recipient of a Pioneer Personal Innovation grant, the Tony Kent Strix Award, the Bloomberg Data Science Research Award, the Criteo Faculty Research Award, the Google Faculty Research Award, the Microsoft PhD Research Fellowship Award, and the Yahoo Faculty and Research Engagement Program Award as well as a large number of NWO grants. He is the director of the newly established Innovation Center for Artificial Intelligence and a former director of Amsterdam Data Science.
De Rijke leads the Information and Language Processing Systems group at the Informatics Institute of the University of Amsterdam, one of the world’s leading academic research groups in information retrieval. His research focus is at the interface of information retrieval and artificial intelligence, with projects on online and offline learning to rank, on recommender systems, and on conversational search.
A laureate of a Pionier personal Innovational Research Incentives grant (comparable to an advanced ERC grant), De Rijke has helped to generate over €65 million in project funding. With an h-index of 69, he has published over 750 papers, published or edited over a dozen books, is editor-in-chief of ACM Transactions on Information Systems, co-editor-in-chief of Foundations and Trends in Information Retrieval and of Springer’s Information Retrieval book series, (associate) editor for various journals and book series, and a current and former coordinator of retrieval evaluation tracks at TREC, CLEF, and INEX. Recently, he was co-chair for SIGIR 2013; general co-chair for ECIR 2014, WSDM 2017, and ICTIR 2017; co-chair for “web search systems and applications” at WWW 2015; short paper co-chair for SIGIR 2015; and program co-chair for information retrieval for CIKM 2015.
The retrieval and language technology developed by De Rijke’s research group is being used by organizations around the Netherlands and beyond, and has given rise to various spin-off initiatives.
Prof. of Databases and Information Systems
Universität Koblenz-Landau, Germany
Steffen is full professor for Databases and Information Systems at the Universität Koblenz-Landau, Germany, and full professor for Web and Computer Science at the University of Southampton, UK. He studied computer science and computational linguistics in Erlangen (Germany), Philadelphia (USA), and Freiburg (Germany). In his research career he has managed to avoid almost all the good advice that he now gives to his team members. Such advice includes focusing on research (vs. a company) and concentrating on only one or two research areas (vs. considering ontologies, the Semantic Web, the Social Web, data engineering, text mining, peer-to-peer, multimedia, HCI, services, software modelling and programming, and some more). Though, actually, improving how we understand and use text and data is a good common denominator for much of Steffen’s professional activities.