This is the website of the Open Data Hub documentation, a collection of technical resources about the Open Data Hub project. The website serves as the main resource portal for everyone interested in accessing the data or deploying apps based on datasets & APIs provided by the Open Data Hub team.
The technical resources comprise:
Catalogue of available datasets.
How-tos, FAQs, and various tips and tricks for users.
Links to the full API documentation.
Resources for developers.
For non-technical information about the Open Data Hub project, please point your browser to https://opendatahub.com/.
The Open Data Hub project envisions the development and set-up of a portal whose primary purpose is to offer a single access point to all (Open) Data from the region of South Tyrol, Italy, that are relevant for the economic sector and its actors.
The availability of Open Data from a single source will allow everybody to utilise the Data in several ways:
Digital communication channels. Data are retrieved from the Open Data Hub and used to provide informative services, such as newsletters containing weather forecasts, or used in hotels to promote events taking place in the surroundings, along with additional information such as seat availability, descriptions, and how to reach each event.
Applications for any device, built on top of the data. These can be either PoCs that explore new ways or new fields in which to use Open Data Hub data, or novel and innovative services or software products.
Internet portals and websites. Data are retrieved from the Open Data Hub and visualised within graphical charts, graphs, or maps.
There are many services and software products that rely on the Open Data Hub's data; they are listed in the Apps Built From Open Data Hub Datasets section, grouped according to their maturity: production, beta, and alpha stage.
Figure 1 gives a high level overview of the flow of data within the Open Data Hub: at the bottom, sensors gather data from various domains, which are fed to the Open Data Hub Big Data infrastructure and made available through endpoints to (third-party) applications, web sites, and vocal assistants. A more technical and in-depth overview can be found in Section Open Data Hub Architecture.
All the data within the Open Data Hub will be easily accessible, with a preference for open interfaces and APIs built on existing standards such as the Open Travel Alliance (OTA), the General Transit Feed Specification (GTFS), and AlpineBits.
The Open Data Hub team also strives to keep all data regularly updated, and uses standard exchange formats such as JSON and the Data Catalog Vocabulary (DCAT) to facilitate their spread and use. Depending on the development of the project and the interest of users, more standards and data formats might be supported in the future.
The Open Data Hub team welcomes new partners and users to join the project. For this reason, a number of services are offered to help you get in touch with the project and to support you, whether you want to collaborate with the project or simply use the data made available.
The Open Data Hub team has envisioned different ways to join the project, detailed in the remainder of this section, including–but not limited to–reporting bugs in the API or errors in the API output, making feature requests or suggestions for improvement, and even participating in the development or supplying new data to be shared through the Open Data Hub.
Figure 2 shows at a glance the services offered by the Open Data Hub together with the various roles that potential users can play within the Open Data Hub, according to their interest, expertise, skills, and knowledge.
The Open Data Hub team helps you in various activities: giving visibility to your data; identifying suitable common formats, processing algorithms, and licenses to share your data; and using popular technologies known to application developers.
The Open Data Hub team supports software development companies in accessing real-time data by making a stable communication channel available to everyone, one that uses a machine-readable protocol and an Open Data license, to ensure everybody can easily find all the available data.
Demo App Development
The Open Data Hub team collaborates with companies to design and develop Proofs of Concept and demo software that use data offered by the Open Data Hub, released under an Open Source license. All this software can then either be used as-is–to show and test its potential–or serve as a guideline and inspiration to develop new digital products.
Besides the tasks that you find below, you can also help the Open Data Hub project grow and improve by reporting bugs or asking for new features, by following the directions presented in the dedicated section below.
How to Collaborate¶
This section gives an overview of the roles you can take on when collaborating with the Open Data Hub project.
Depending on your interest in the Open Data Hub project, we welcome your participation in one of the roles that we have envisioned: User (Data Consumer), App Developer, Core Hacker, and Data Provider. Below you can find a list of tasks that belong to each of these roles; we believe that this list can help you understand which type of contribution you can give to the Open Data Hub project!
As a User I can…
- …install and use an app built on top of the API.
Browse the list of available applications developed by third-parties that use Open Data Hub data, choose one that you are interested in, install it and try it out, then send feedback to their developers if you feel something is wrong or missing.
- …explore the data in the datasets.
Choose a dataset from the list of Domains and Datasets and start gathering data from it, by using the documentation provided in this site. You can then provide any kind of feedback on the dataset: reports about any malfunctions, suggestions for improvements or new features, and so on.
Moreover, if you are interested in datasets that are not yet in our collection, get in touch with the Open Data Hub team to discuss your request.
As a Data Provider I can…
- …provide Open Data to the Open Data Hub project.
Share, under an Open Data licence, the data you own that can prove interesting for the Open Data Hub, for example because they complement existing data in the Open Data Hub or pertain to an area which is not yet covered. Let your Open Data be freely used by App Developers in their applications.
A Data Provider is an entity (be it a private company, a public institution, or a citizen) that gathers data on a regular basis from various sensors or devices and stores them in some kind of machine-readable format.
As an App Developer I can…
- …harvest data exposed by the datasets.
Browse the list of Domains and Datasets to see what types of data are contained in the datasets, and think how they can be used.
For this purpose, we maintain an updated list of the available datasets with links to the API to access them.
- …build an application with the data.
Write code for an app that combines the data you can harvest from the available datasets in various, novel ways.
To reach this goal, you need to access the APIs, their documentation, and the datasets. It is then your task to discover how you can reuse the data in your code.
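As a minimal sketch of this kind of reuse, the snippet below extracts a subset of fields from a JSON payload. The payload and its field names (Items, Shortname, Active) are illustrative only; the actual schema depends on the dataset and is documented in the API reference.

```python
import json

# A sample payload shaped like a typical JSON API response; the field
# names here are illustrative, not the authoritative Open Data Hub schema.
sample_response = """
{
  "Items": [
    {"Shortname": "Museum of Nature", "Active": true},
    {"Shortname": "City Tower", "Active": false}
  ]
}
"""

def active_names(payload: str) -> list:
    """Return the names of the active entries in a JSON payload."""
    data = json.loads(payload)
    return [item["Shortname"] for item in data["Items"] if item["Active"]]

print(active_names(sample_response))
```

In a real app, the payload would of course come from an HTTP request to a dataset endpoint rather than from an inline string.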
- …integrate Open Data Hub data using Web Components.
The Open Data Hub team and their partners have developed a small library of Web Components that can be integrated into existing web sites or used as guidance to develop new Web Components.
- …publish my app in Open Data Hub.
As soon as you have developed a stable version of your app, get in touch with us: We plan to maintain an updated list of apps based on our dataset included with this documentation.
No software installation is needed: go to the list of available applications developed by third parties that use Open Data Hub data and to the API documentation of each dataset, start from there, and develop, in a language of your choice, an application that uses our data.
As an Open Data Hub Core Hacker I can…
- …help shape the future of Open Data Hub.
Participate in the development of Open Data Hub: Build new data collectors, extend the functionality of the broker, integrate new datasets on the existing infrastructure, develop new stable API versions.
Becoming a core hacker, however, requires a few additional tasks to be carried out:
Learn how to successfully integrate your code with the existing code-base and how to interact with the Open Data Hub team. In other words, you need to read and accept the Guidelines for Developers (click on the link for a summary), which are available in two extended, separate parts: Platform Documentation and Database Documentation.
Understand the Open Data Hub Architecture.
Learn about the Development, Testing, and Production Environments.
Install the necessary software on your local workstation (be it a physical workstation, a virtual machine, or a Docker instance), including PostgreSQL with the PostGIS extension, a JDK, and git.
Set up all the services needed (database, application server, and so on).
Clone our git repositories. To successfully complete these tasks, please read the How to set up your local Development Environment? tutorial, which guides you stepwise through all the required set up and configuration, along with some troubleshooting advice.
Coding. That’s the most fun part, enjoy!
To support the installation tasks and ease the set up of your workstation, we are developing a script that will do the job for you. Stay tuned for updates.
Bug Reporting and Feature Requests¶
This section explains what to do in case you:
1. have found an error or a bug in the APIs;
2. would like to suggest or request an enhancement of the APIs;
3. have some requests about the datasets;
4. find typos or any error in this documentation repository;
5. have an idea for some specific tutorial.
If your feedback is related to the Open Data Hub Core, including technical bugs or suggestions as well as requests about datasets (i.e., points 1 to 3 above), please open an issue on the following website:
If your feedback is related to the Open Data Hub Documentation, please send an email to : our Customer Service will take charge of it.
However, if you feel confident with GitHub, you can open a new issue using either of the templates.
You need a valid GitHub account to report issues and interact with the Open Data Hub team.
We keep track of your reports in our bug trackers, where you can also follow progress and comments of the Open Data Hub team members.
Accessing the Open Data Hub¶
There are many ways to access the Open Data Hub and its data: interactive and non-interactive, using a browser or the command line.
Various dedicated tutorials are available in the List of HOWTOs section to help you get started, while in section Getting Involved you can find additional ways to collaborate with the Open Data Hub Team and use the data.
Accessing data in the Open Data Hub with a browser is useful on different levels: a casual user can have a look at the type and quality of the data provided; a developer can use the REST API implemented by the Open Data Hub, or check whether the results of their app are coherent with those retrieved through the API; and everyone can get acquainted with the various methods to retrieve data.
Besides the online tools developed by the Open Data Hub and described in section Quickstart, the following resources can be accessed using a browser.
Go to the Apps Built From Open Data Hub Datasets section of the documentation, particularly sub-sections Production Stage Apps and Beta Stage Apps, and choose one of the web sites and portals that are listed there. Each of them uses the data gathered from one or more Open Data Hub datasets to display useful information. You can then see how data are exposed and browse them.
In the same Apps Built From Open Data Hub Datasets section, you can also check the list of Alpha Stage Apps, choose one that you think you can expand, and get in touch with its authors to suggest additional features or collaborate on its further development.
Open the Open Data Hub Knowledge Graph Portal, where you can explore all the data that are already available as a virtual knowledge graph. Here you can check out some of the pre-cooked queries, and view and modify them to suit your needs with the help of the W3C’s SPARQL query language; SPARQL can also be used in the Playground to freely query the endpoint.
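To give an idea of what such a query looks like, the sketch below builds a SPARQL query listing lodging businesses and their names, wrapped in a small Python helper. The class and property names follow the schema.org vocabulary used by the Knowledge Graph Portal, but the query itself is an illustration, not one of the portal's pre-cooked queries.

```python
def lodging_query(limit: int = 10) -> str:
    """Build a SPARQL query listing lodging businesses and their names.

    The schema.org prefix matches the vocabulary described in the
    Virtual Knowledge Graph section; the query is only an illustration.
    """
    return f"""
PREFIX schema: <https://schema.org/>
SELECT ?business ?name WHERE {{
  ?business a schema:LodgingBusiness ;
            schema:name ?name .
}} LIMIT {limit}
""".strip()

print(lodging_query(5))
```

The resulting string can be pasted into the Playground or sent to the SPARQL endpoint with any HTTP client.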
Programmatic and non-interactive access to the Open Data Hub’s dataset is possible using any of the following methods made available by the Open Data Hub team.
The AlpineBits Alliance strives to develop and to spread a standard format to exchange tourism data. Open Data Hub allows access to all data produced by AlpineBits using dedicated endpoints:
The AlpineBits HotelData dataset can be accessed from https://alpinebits.opendatahub.com/AlpineBits/.
The dedicated howto How to access Open Data Hub AlpineBits Server as a client
The AlpineBits DestinationData endpoint is available at https://destinationdata.alpinebits.opendatahub.com/.
Statistical Access with R
R is free software for statistical analysis that can also create graphics from the gathered data.
The Open Data Hub Team has developed and made available bzar, an R package that can be used to access BZ Analytics data (see also How to Access Analytics Data in the Mobility Domain) and process them using all the R capabilities. Download and installation instructions, along with example usage can be found on the bzar repository.
There is a howto that explains how to fetch data from the Open Data Hub datasets using R and SPARQL: How to Access Open Data Hub Data With R and SPARQL.
The APIs are composed of a few generic methods that can be combined with many parameters to retrieve only the relevant data, which can then be post-processed in the preferred way.
The following table summarises how the two versions of the API can be used within the Open Data Hub’s domains.
There are currently two versions of the API, v1 and v2; the former is now deprecated for the Mobility domain and marked as such throughout the Open Data Hub documentation. New users are recommended to use API v2, while users of API v1 are encouraged to plan a migration to the new API.
The new API v2 takes a different approach compared to the previous version and is therefore not compatible with API v1. The main difference is that all data stored in the Open Data Hub can now be retrieved from a single endpoint, while API v1 had an endpoint for each dataset.
This change in approach also entails a breaking change for users of API v1: the initial step is no longer to open the URL of the dataset and start exploring, but to retrieve the stationTypes and then retrieve additional data about each of them. A stationType is the main object of a dataset, to which all the information in a dataset relates; a dataset includes at least one stationType. A new, dedicated howto describing the new API v2 in detail, with a few basic examples, is already available in the dedicated section of this documentation.
It is important to remark that the API v2 is only available for datasets in the Mobility Domain.
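The two-step access pattern just described can be sketched as follows: first build the URL that lists the stationTypes, then build a URL that fetches data for one of them. The base URL, path layout, and parameter names below are assumptions for illustration; the dedicated API v2 howto documents the authoritative endpoints.

```python
from urllib.parse import quote

# Assumed base URL for the mobility API v2; verify against the howto.
BASE = "https://mobility.api.opendatahub.com/v2/flat"

def station_types_url() -> str:
    """Step 1: URL that would list all available stationTypes."""
    return BASE

def stations_url(station_type: str, limit: int = 10) -> str:
    """Step 2: URL that would fetch stations of one stationType.

    The `limit` parameter name is an assumption for illustration.
    """
    return f"{BASE}/{quote(station_type)}?limit={limit}"

print(stations_url("EChargingStation", limit=5))
```

The point is the shape of the workflow: a single endpoint, with the stationType selecting the data, rather than one endpoint per dataset as in API v1.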
Unlike browser access, which provides interactive access to the data with the option to incrementally refine a query, command line access proves useful for non-interactive, one-directional, and quick data retrieval in a number of scenarios, including:
Scripting, data manipulation and interpolation, to be used in statistical analysis.
Applications that gather data and present them to the end users.
Automatic updates to third-party websites or kiosk systems, e.g., in hotel halls.
Command line access to the data is usually carried out with the curl Linux utility, which retrieves information non-interactively from a remote site. It supports a variety of options and can save the contents it downloads, which can then be sent to other applications and manipulated.
The number of options required by curl to retrieve data from an Open Data Hub dataset is limited–usually no more than 3 or 4–but their syntax and content might become long and hard for a human to read, due to the number of filters available. For example, to retrieve the list of all points of interest in South Tyrol, the following command can be used:
~$ curl -X GET "https://tourism.api.opendatahub.com/v1/ODHActivityPoi?pagenumber=1&pagesize=10&type=63&subtype=null&poitype=null&idlist=null&locfilter=null&langfilter=null&areafilter=null&highlight=null&source=null&odhtagfilter=null&odhactive=null&active=null&seed=null&latitude=null&longitude=null&radius=null" -H "accept: application/json"
Your best opportunity to learn the correct syntax and parameters is to go to the swagger interface of the tourism or mobility domain and execute a query: along with the output, the corresponding curl command used to retrieve the data is shown.
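The same request can also be assembled programmatically. The sketch below rebuilds the URL from the curl example above, keeping only the parameters that are actually set (the others default to null server-side); the actual request is left commented out so the snippet works offline.

```python
from urllib.parse import urlencode

BASE = "https://tourism.api.opendatahub.com/v1/ODHActivityPoi"

def poi_url(pagenumber: int = 1, pagesize: int = 10, poi_type: int = 63) -> str:
    """Build the points-of-interest URL from the curl example, with only
    the non-null query parameters."""
    params = {"pagenumber": pagenumber, "pagesize": pagesize, "type": poi_type}
    return f"{BASE}?{urlencode(params)}"

print(poi_url())
# To actually fetch the data, one could then run e.g.:
#   import urllib.request
#   body = urllib.request.urlopen(poi_url()).read()
```

Whether every omitted parameter may really be dropped is an assumption worth checking in the swagger interface.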
The Open Data Hub Virtual Knowledge Graph¶
Some datasets in the Open Data Hub, namely Accommodations, Gastronomy, and Events, are organised into a Virtual Knowledge Graph that can be accessed using SPARQL from the dedicated SPARQL portal. In order to define more precise queries, this section describes the Knowledge Models (KM) underlying these datasets; the description of each KM is accompanied by an UML diagram which shows the KM at a glance.
Besides standard W3C’s OWL and RDF vocabularies, the Open Data Hub VKG uses:
schema.org for most of the entities used
geosparql for geo-references and coordinates of objects
Dublin Core’s purl for linking to related resources
Diagrams use UML class diagram formalism widely adopted in Knowledge Representation and in particular in the W3C’s Recommendation documents for the Semantic Web. The following additional notation applies:
The default prefix used for classes and properties is https://schema.org/. This means that, unless differently stated, the definition of classes and properties, including their attributes, rely on a common standard as defined in schema.org’s vocabulary. As examples, see the LodgingBusiness class and the containedInPlace property.
Other prefixes are explicitly pre-pended to the Class or Property name, like e.g., noi:numberOfUnits.
Arrows with a white tip denote a sub-class relationship, while black tips denote object properties.
Cardinality of 1 is usually not shown, but implied; the look-across notation is used. For example, the image on the right-hand side–an excerpt from the event dataset VKG–can be read as: 0 to N MeetingRooms are containedInPlace Place.
The entire mobility domain has a unique underlying knowledge model, which encompasses all the datasets and therefore also eases the creation of cross-dataset queries. Since the mobility domain gathers data from sensors, the SOSA ontology, which uses the sosa prefix, is also useful in this domain. You can check the classes and properties of SOSA in the W3C’s dedicated wiki page.
The central concept is Station, of which all StationTypes are subclasses, while Observation, LatestObservation, and ObservableProperty provide time-related information about the gathered data and relate to Sensor. Together with Platform, Sensor establishes the relation between a Station and its Sensors: for example, the EChargingPlug sensor is hosted on an EChargingStation Platform, which is also a Station.
The knowledge model is completed by the Feature superconcept, which also contains Municipality and RoadSegment, the latter defined by a hasOriginStation and a hasDestinationStation.
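As an illustration of how the SOSA terms above fit together, here is a sketch (wrapped in a Python helper for convenience) of a SPARQL query that would list Platforms and the Sensors they host. The sosa: prefix is the standard SOSA namespace; whether the endpoint exposes exactly these classes should be verified against the portal.

```python
def station_sensor_query() -> str:
    """Hypothetical SPARQL sketch: Platforms and the Sensors they host,
    using the SOSA relations described above."""
    return """
PREFIX sosa: <http://www.w3.org/ns/sosa/>
SELECT ?platform ?sensor WHERE {
  ?platform a sosa:Platform ;
            sosa:hosts ?sensor .
  ?sensor a sosa:Sensor .
}
""".strip()

print(station_sensor_query())
```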
The central class in this dataset is LodgingBusiness, to which multiple Accommodations belong.
A LodgingBusiness has the attributes geo:asWKT, email, name, telephone, and faxNumber, and the relations
address to class PostalAddress, which consists of streetAddress, postalCode, and AddressLocality
geo, i.e., a geographical location, to class GeoCoordinates, consisting of latitude, longitude, and elevation
There are (sub-)types of LodgingBusiness–called Campground, Hotel, Hostel, and BedAndBreakfast–sharing its attributes and relations.
An Accommodation is identified by a name and a noi:numberOfUnits and has relations
containedInPlace to LodgingBusiness (multiple Accommodations can belong to it)
occupancy to QuantitativeValue, which gives the maxValue and minValue of available units of accommodation and a unitCode.
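The accommodation model above can be queried along its containedInPlace relation, for example to list Accommodations together with the LodgingBusiness they belong to. The query below is an illustration, not taken from the portal's pre-cooked set, and the URI behind the noi: prefix is a placeholder, not the real namespace.

```python
def accommodation_query() -> str:
    """Illustrative SPARQL sketch over the accommodation knowledge model:
    Accommodations joined to their LodgingBusiness via containedInPlace.
    The noi: namespace URI below is a placeholder."""
    return """
PREFIX schema: <https://schema.org/>
PREFIX noi: <http://noi.example/ontology#>
SELECT ?acc ?units ?business WHERE {
  ?acc a schema:Accommodation ;
       noi:numberOfUnits ?units ;
       schema:containedInPlace ?business .
  ?business a schema:LodgingBusiness .
}
""".strip()

print(accommodation_query())
```

The real namespace for the noi: prefix can be read off the portal's pre-cooked queries before running this.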
The main class of this dataset is FoodEstablishment, described by geo:asWKT, description, name, telephone, and url.
A FoodEstablishment has
a PostalAddress–consisting of streetAddress, postalCode, and AddressLocality–as address
a GeoCoordinates–latitude, longitude, and elevation–as a geographical location geo
There are different (sub-)types of FoodEstablishment, all sharing the same attributes: Restaurant, FastFoodRestaurant, BarOrPub, Winery, and IceCreamShop.
The main class in this dataset is Event, described by a startDate, an endDate, and a description. Every Event has an organizer–either a Person or an Organization–and a location.
A Person–identified by givenName, familyName, email, and telephone–worksFor an Organization, which has a name and an address, i.e., a PostalAddress consisting of streetAddress, postalCode, AddressLocality, and AddressCountry.
Finally, an Event has as location a MeetingRoom–identified by a name–which is containedInPlace a Place, which also has a name.
The SPARQL howto, which guides you in interacting with the SPARQL endpoint.
W3C Recommendation for OWL2 and RDF.
Official Specification of UML Infrastructure, available from the Object Management Group.
The AlpineBits Client¶
The AlpineBits Alliance strives to develop and to spread a standard format to exchange tourism data. There are two datasets they have developed and keep up to date that are of particular interest to Open Data Hub users: HotelData and DestinationData.
The AlpineBits DestinationData is a standardisation effort to allow the exchange of information related to mountains, events, and tourism. Developed by the AlpineBits Alliance, DestinationData relies on a number of standards (including JSON, REST APIs, Schema.org, and OntoUML) to build the AlpineBits DestinationData Ontology, the core result of the effort. The goal of DestinationData is to provide a means to describe events, their location, and additional information about them. For this purpose, the DestinationData Ontology specifies a number of Named Entities used to describe Events and Event Series, Mountain Areas, Places, Trails, Agents, and so on.
The full specification of the ontology, including the architecture of the API and the description of the datatypes defined, can be found in the latest 2021-04 version of the official DestinationData specs (PDF).
The reference implementation of AlpineBits DestinationData is provided by Open Data Hub and publicly available at the dedicated endpoint at https://destinationdata.alpinebits.opendatahub.com/.
More information and resources about AlpineBits DestinationData can be found on the official page.
The AlpineBits HotelData is meant for data strictly related to hotels and booking, like Inventory Basic, Inventory HotelInfo, and FreeRooms. This dataset can be accessed from the dedicated endpoint at https://alpinebits.opendatahub.com/AlpineBits/.
The dedicated howto How to access Open Data Hub AlpineBits Server as a client.
The official page of AlpineBits HotelData.