Active projects and challenges as of 05.12.2024 02:41.
Alain und Laura sind neue Medien
a poetic journey
Two comedians take a poetic journey along selected cultural datasets. They uncover trivia, funny and bizarre facts, and give life to data in a most peculiar way. In French, German, English and some Italian.
Data
- http://www.openstreetmap.org
- http://make.opendata.ch/wiki/data:glam_ch#pcp_inventory
- http://make.opendata.ch/wiki/data:glam_ch#carl_durheim_s_police_photographs_of_stateless_persons
- http://make.opendata.ch/wiki/data:glam_ch#swiss_book_2001
Team
- Catherine Pugin
- Laura Chaignat
- Alain Guerry
- Dominique Genoud
- Jérôme Treboux
- Florian Evéquoz
Links
- Tools: Knime, Python, D3, sweat, coffee and salmon pasta
Ancestors on Wikidata
automatic visualisation of family trees
Ancestors gadget for Wikidata
The project was initially created during the 1st Swiss Open Cultural Data Hackathon in 2015.
This simple gadget displays the family tree of a given item from wikidata. It does that by querying Wikidata for the father/mother values. If you want to use it on Wikidata, you can enable it in your user settings (enable "Ancestor").
The tool is available at the following URL: https://tools.wmflabs.org/family/ancestors.php?q=Q7742&lang=en
Parameters
The gadget takes 3 parameters in the URL:
- q (query): the root element; must be a Wikidata ID (e.g. Q7742). Default is Q154952 (Willem-Alexander of the Netherlands).
- lang (language): the language in which the data should be displayed. Default is en.
- level: the number of levels of the family tree. Default is 5.
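Combining all three parameters, an illustrative request (the parameter values here are chosen arbitrarily) looks like this:
https://tools.wmflabs.org/family/ancestors.php?q=Q7742&lang=de&level=3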
Artmap
puts art... on a map
Demo: Interactive map on GitHub
A simple webpage created to display the art on a map, allowing users to click on each individual element and follow its URL link:
Find art around you on a map!
Data
Team
- odi
- julochrobak
- and other team members
catexport
tool for working with Wikimedia data
Export Wikimedia Commons categories to your local machine with a convenient web user interface.
Try it here: http://catexport.herokuapp.com
Data
- Wikimedia Commons API
Team
- odi
- and other team members
Related
At #glamhack a number of categories were made available offline by loleg using this shell script and magnus-toolserver.
catexport
This project is aimed at GLAM institutions that provide their data to Wikimedia Commons and want to extract the categorization done by the community in a structured way. The tool uses the Wikimedia Commons API in the background and generates CSV in the following format:
filename,category
If a file has multiple categories, there will be multiple entries for it. The first column acts as a unique identifier of the file.
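For illustration, here is a minimal Python sketch of such an export against the Wikimedia Commons API (not the tool's actual code; the category name is just an example and the batch is capped at 50 titles):

import csv
import requests

API = "https://commons.wikimedia.org/w/api.php"

def file_categories(category):
    # 1) list the files in the given category
    r = requests.get(API, params={
        "action": "query", "list": "categorymembers",
        "cmtitle": "Category:" + category, "cmtype": "file",
        "cmlimit": 50, "format": "json"}).json()
    titles = [m["title"] for m in r["query"]["categorymembers"]]
    # 2) fetch the categories of each file
    r = requests.get(API, params={
        "action": "query", "prop": "categories",
        "titles": "|".join(titles), "cllimit": "max",
        "format": "json"}).json()
    # one (filename, category) pair per row, as in the format above
    for page in r["query"]["pages"].values():
        for cat in page.get("categories", []):
            yield page["title"], cat["title"]

with open("export.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["filename", "category"])
    writer.writerows(file_categories("Durheim portraits contributed by CH-BAR"))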
Cultural Music Radio
geolocalised mobile app
This is a cultural music radio which plays suitable music depending on your gps location or travel route. Our backend server looks for artists and musicians nearby the user's location and sends back an array of Spotify music tracks which will then be played on the iOS app.
Server Backend
We use a Python server backend to process RESTful API requests. Clients send their GPS location and receive a list of Spotify tracks which have a connection to this location (e.g. the artist or musician comes from there).
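A minimal Flask sketch of such an endpoint (the route name, parameters and lookup stub are assumptions for illustration, not the project's actual code):

from flask import Flask, jsonify, request

app = Flask(__name__)

def find_tracks_near(lat, lng):
    # Placeholder: the real backend resolves artists near (lat, lng),
    # e.g. via MusicBrainz area data, to Spotify track URIs.
    return ["spotify:track:example"]

@app.route("/tracks")
def tracks():
    lat = float(request.args["lat"])
    lng = float(request.args["lng"])
    return jsonify(tracks=find_tracks_near(lat, lng))

if __name__ == "__main__":
    app.run()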
Javascript Web Frontend
We provide a JavaScript web frontend so the services can be used from any browser. The GPS location is determined via HTML5 location services.
iOS App
In our iOS App we get the user's location and send it to our python backend. We then receive a list of Spotify tracks which will be then played via Spotify iOS SDK. There is also a map available which shows where the user is currently located and what is around him.
We offer a nicely designed user interface which allows users to quickly switch between music and map view to discover both environment and music!
Data and APIs
MusicBrainz for linking locations to artists and music tracks
Spotify for music streaming in the iOS app (account required)
Team
Culture Radio
This is a cultural music radio which plays suitable music depending on your gps location.
Our backend server looks for artists and musicians nearby the user's location and sends back an array of Spotify music tracks which will then be played on the iOS app.
Wiki
All important facts about this project are documented in our wiki.
More information about this project is also available at http://quappi.com
Diplomatic Documents and Swiss Newspapers in 1914
interlinked & searchable
This project gathers two data sets: Diplomatic Documents of Switzerland and Le Temps Historical Archive for the year 1914. Our aim is to find links between the two data sets to allow users to more easily search the corpora together. The project is composed of two parts:
- The Geographical Browser of the corpora. We extract all places from Dodis metadata and all places mentioned in each article of Le Temps, we then match documents and articles that refer to the same places and visualise them on a map for geographical browsing.
- The Text similarity search of the corpora. We train two models on the Le Temps corpus: Term Frequency Inverse Document Frequency and Latent Semantic Indexing with 25 topics. We then develop a web interface for text similarity search over the corpus and test it with Dodis summaries and full text documents.
Data and source code
- Data.zip (CC BY 4.0) (DropBox)
- github
Documentation
In this project, we want to connect newspaper articles from the Journal de Genève (a Genevan daily newspaper) and the Gazette de Lausanne to a sample of the Diplomatic Documents of Switzerland database (Dodis). The goal is to conduct requests in the Dodis descriptive metadata to look for what appears in a given interval of time in the press by comparing occurrences from both data sets. Thus, we should be able to examine if and how the written press reflected what was happening at the diplomatic level. The time interval for this project is the summer of 1914.
In this context, we first cleaned the data, for example by removing noise caused by short strings of characters and by stopwords. The cleansing is a necessary step to reduce noise in the corpus. We then prepared TF-IDF vectors of words and LSI topics and represented each article in the corpus in those terms. Finally, we indexed the whole corpus of Le Temps to prepare it for similarity queries. The last step was to build an interface to search the corpus by entering a text (e.g. a Dodis summary).
Difficulties were not only technical. For example, the data are massive: we started by working on fifteen days of material, then on three months. Moreover, some Dodis documents were classified (i.e. non-public) at the time, so some of the decisions do not appear in the newspaper articles. We also used the TXM software, a platform for lexicometry and statistical text analysis, to explore both text corpora (the Dodis metadata and the newspapers) and to detect frequencies of significant words and their presence in both corpora.
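A minimal sketch of this TF-IDF + LSI similarity pipeline with gensim (the library installed in the Notes below); the tokenized articles here are stand-ins, not real corpus data:

from gensim import corpora, models, similarities

articles = [["guerre", "serbie", "ultimatum"],
            ["conseil", "federal", "neutralite"]]  # stand-in tokens

dictionary = corpora.Dictionary(articles)
bow = [dictionary.doc2bow(doc) for doc in articles]
tfidf = models.TfidfModel(bow)
lsi = models.LsiModel(tfidf[bow], id2word=dictionary, num_topics=25)
index = similarities.MatrixSimilarity(lsi[tfidf[bow]])

# query the index, e.g. with a tokenized Dodis summary
query = dictionary.doc2bow(["ultimatum", "serbie"])
print(index[lsi[tfidf[query]]])  # similarity score for every article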
Dodis Map
Team
Diplomatic Documents and Swiss Newspapers in 1914
This project gathers two data sets: Diplomatic Documents of Switzerland (http://dodis.ch/en/home) and Le Temps Historical Archive for the year 1914. Our aim is to find occurrences appearing in both data sets and detect the events they have in common.
More details in the project's wiki
$ pip install flask gensim nltk beautifulsoup4
$ python
>>> import nltk
>>> nltk.download()
Downloader> l            # list the available packages
Downloader> d stopwords  # download the stopwords corpus
Notes
flying Eduardo
building a universal template for web apps with “Points of Interest”
My idea was (and is) to build a universal template for web apps with "Points of Interest". At the same time, I was thinking about Linked Open Data, data redundancies and things like that. I hope I will someday find time to document these results. But the most important thing was: learn how a hackathon works and get a sticker for my laptop.
Demo
Data
- commons.wikimedia.org/wiki/Category:CH-NBphotographsbyEduardSpelterini
- EAD-WEHR Spelterini, Eduard: Sammlung Eduard Spelterini,
Team
- Markus Steiger - 01241.com
Links
- Blog post (in German): used tools, usability, concept, links
- Similar applications with this template: hotelterminus | Baden gestern und heute
Graphing the Stateless People in Carl Durheim's Photos
to reconstruct relationships
CH-BAR has contributed 222 photos by Carl Durheim, taken in 1852 and 1853, to Wikimedia Commons. These pictures show people who were in prison in Bern for being transients and vagabonds, what we would call travellers today. Many of them were Yenish. At the time the state was cracking down on people who led a non-settled lifestyle, and the pictures (together with the subjects' aliases, jobs and other information) were intended to help keep track of these "criminals".
Since the photos' metadata includes details of the relationships between the travellers, I want to try graphing their family trees. I also wonder if we can find anything out by comparing the stated places of origin of different family members.
I plan to make a small interactive webapp where users can click through the social graph between the travellers, seeing their pictures and information as they go.
I would also like to find out more about these people from other sources ... of course, since they were officially stateless, they are unlikely to have easily-discoverable certificates of birth, death and marriage.
- Source code: downloads photographs from Wikimedia, parses metadata and creates a Neo4j graph database of people, relationships and places
Data
- Category:Durheim portraits contributed by CH-BAR on Wikimedia
Team
Links
durheim
Graphing relationships of homeless people photographed by Carl Durheim in Bern, 1852-3.
This code was written for the 1st Swiss Open Cultural Data Hackathon, 27-28 February 2015.
The idea of this project is to explore this collection of photographs from the Swiss Archives: Category:Durheim portraits contributed by CH-BAR. Currently, it is possible to download all of the portraits, parse the associated metadata and load it into a Neo4j graph database. The relationships between people and their places of origin are added to the db and can be browsed with the Neo4j browser interface.
Usage
- Download and start Neo4j. Open http://localhost:7474 in a browser.
- In the terminal, enter: python durheim.py
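As an illustration of what the resulting database might look like, here is a hypothetical py2neo sketch (the labels, relationship types, names and connection details are assumptions, not necessarily what durheim.py uses):

from py2neo import Graph, Node, Relationship

# connection details depend on your local Neo4j setup
graph = Graph("bolt://localhost:7687", auth=("neo4j", "password"))

anna = Node("Person", name="Anna", occupation="basket maker")
josef = Node("Person", name="Josef", occupation="tinker")
bern = Node("Place", name="Bern")

graph.create(Relationship(anna, "SIBLING_OF", josef))
graph.create(Relationship(anna, "ORIGIN", bern))

# browse in the Neo4j browser, or query directly:
query = "MATCH (p:Person)-[:ORIGIN]->(pl:Place) RETURN p.name, pl.name"
for record in graph.run(query):
    print(record)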
Historical Tarot Freecell
play online with ancient cards
Historical playing cards are witnesses of the past, icons of the social and economic reality of their time. On display in museums or stored in archives, ancient playing cards are no longer what they once were meant to be: a deck of cards made for playful entertainment. This project aims at making historical playing cards playable again by means of the well-known solitaire card game "Freecell".
Tarot Freecell is a fully playable solitaire card game coded in HTML 5. It offers random setup, autoplay, reset and undo options. The game features a historical 78-card deck used for games and divination. The cards were printed in the 1880s by J. Müller & Cie., Schaffhausen, Switzerland.
The cards need to be held upright and use Roman numeral indexing. The lack of modern features like point symmetry and Arabic numerals made the deck increasingly unpopular.
Due to the lack of corner indices - a core feature of modern playing cards - the vertical card offset needs to be significantly higher than in other computer adaptations.
Instructions
Cards are dealt out with their faces up into 8 columns until all 52 cards are dealt. The cards overlap but you can see what cards are lying underneath. On the upper left side there is space for 4 cards as a temporary holding place during the game (i.e. the «free cells»). On the upper right there is space for 4 stacks of ascending cards to begin with the aces of each suit (i.e. the «foundation row»).
Look for the aces of the 4 suits -- swords, sticks, cups and coins. As soon as the aces are free (which means that there are no more cards lying on top of them) they will flip to the foundation row. Play the cards between the columns by creating lines of cards in descending order, alternating between swords/sticks and cups/coins. For example, you can place a swords nine onto a coins ten, or a cups jack onto a sticks queen.
Placing cards onto free cells (1 card per cell only) will give you access to other cards in the columns. Look for the lower numbers of each suit and move cards to gain access to them. You can move entire stacks; the number of cards moveable at a time is limited to the number of free cells (and empty stacks) plus one. For example, with two free cells and one empty stack you can move up to four cards at once.
The game strategy comes from moving cards to the foundations as soon as possible. Try to increase the foundations evenly, so you have cards to use in the columns. If «Auto» is switched on, cards no other card can be placed on will automatically flip to the foundations.
You win the game when all 8 columns are sorted in descending order. All remaining cards will then flip to the foundations, from ace to king in each suit.
Updates
- 2015/02/27 v1.0: Basic game engine
- 2015/02/28 v1.1: Help option offering modern suit and value indices in the upper left corner
- 2015/03/21 v1.1: Retina resolution and responsive design
Data
- Wikipedia: Tarot 1JJ
- Wikimedia Commons: Tarot 1JJ card set
Author
- Prof. Thomas Weibel, Thomas Weibel Multi & Media
Historical Views of Zurich Data Upload
into Wikimedia Commons
Preparation of approx. 200 historical photographs and pictures from the Baugeschichtliches Archiv Zurich for upload onto Wikimedia Commons: enhancing the metadata, adding landmarks and photographers as additional category metadata.
Link to upload to Wikimedia Commons will follow.
Data
Link to upload will follow
Team
- Micha Rieser
- Reto Wick
- wild
- Marco Sieber
- Martin E. Walder
- and other team members
Lausanne Historic GeoGuesser
game with historic photos and modern maps
A basic GeoGuesser game using pictures of Lausanne from the 19th century. All images are available on http://musees.lausanne.ch/ and are part of the Musée Historique de Lausanne collection.
Data
Team
Oldmaps online
republishing historical Swiss maps
screenshot: map from Ryhiner collection in oldmapsonline
Integrate collections of historical Swiss maps into the open platform www.oldmapsonline.org, starting with at least two collections: Ryhiner (UB Bern) and manuscript maps from the Zentralbibliothek Zürich. This is a pilot for georeferencing maps from a library (ZBZ). A second goal is to load old maps into a georeferencing system and create a competition for the public. For the hackathon, maps and metadata are integrated into the platform mentioned above. At the moment the legal status of metadata from Swiss libraries is not yet clear, and only a few maps are in the public domain (collection Ryhiner at UB Bern). A further goal is to create a register of historical maps from Swiss collections.
Data
- www.zumbo.ch old maps from a private collection (Marcel Zumstein).
- Sammlung Ryhiner (University Library Bern): published in public domain
- Here is the webpage for the georeferencing competition: http://klokan.github.io/openglambern/
- georeferencing tool (by klokan) with random map: http://zumbo.georeferencer.com/random
Team
- Peter Pridal, Günter Hipler, Rudolf Mumenthaler
Links
- Documentation: Google Doc of #glamhack
- Blog or forum posts will follow...
- Tools we used: http://project.oldmapsonline.org/contribute for the metadata scheme (spreadsheet);
Lessons Learnt
We encountered several obstacles:
- Legal aspects: libraries are still reluctant to publish data and even metadata under an open licence. We had permission from the University Library Bern, which published its old maps collection in the public domain, but we were not sure about the use of the metadata. In the end we asked for permission and got it from the library's director. Other (indeed most) works are published under restricted conditions, especially on the platforms e-rara and e-manuscripta.
- Technical aspects: data are usually kept in silos: databases and web services that keep access to the files closed. It was not even easy to get reasonable thumbnails. The ETH Library provided an interface for access to its bibliographic metadata, but the limited access was not enough for our use: some metadata were not included.
Lesson learnt:
- Libraries must declare that their metadata are published under a CC0 license to make reuse possible and legally clear. This is important for other projects as well (like linked Swissbib).
- Libraries, archives and museums with historical holdings must decide whether they want to spread data for wide usage in order to support cultural and scientific projects. The best framework would be an Open Data Policy for publicly financed institutions.
- How can this contribution to society be measured? Usually libraries deliver usage statistics for their materials to their university or administration. These administrations also have to rethink: what is relevant is not downloads from the library's website or visits to the reading room, but the contribution to works in science and culture.
- Web services like e-rara.ch, e-manuscripta.ch and others should support open formats and APIs.
OpenGLAM Inventory
database of heritage institutions
Photo: Gfuerst, CC by-sa 3.0, via Wikimedia Commons
Idea: Create a database containing all the heritage institutions in Switzerland. For each heritage institution, all collections are listed. For each collection the degree of digitization and the level of openness are indicated: metadata / content available in digital format? available online? available under an open license? etc. - The challenge is twofold: First, we need to find a technical solution that scales well over time and across national borders. Second, the processes to regularly collect data about the collections and their status need to be set up.
Step 1: Compilation of various existing GLAM databases (done)
National inventories of heritage institutions are created as part of the OpenGLAM Benchmark Survey; there is also a network of national contacts in a series of countries which have been involved in a common data collection exercise.
Step 2: GLAM inventory on Wikipedia (ongoing)
Port the GLAM inventory to Wikipedia and enrich it with links to existing Wikipedia articles: see the German Wikipedia project "Schweizer Gedächtnisinstitutionen in der Wikipedia". Track the heritage institutions' presence in Wikipedia. Encourage the Wikipedia community to write articles about institutions that have not been covered in Wikipedia yet. Once all existing articles have been inventoried, the inventory can be transposed to Wikidata.
Further steps
- Provide a basic inventory as a Linked Open Data Service
- Create an inventory of collections and their accessibility status
Data
- Swiss GLAM Inventory (Datahub)
Team
- beat_estermann
- various people from the Wikipedia community
- and maybe you?
Links
- Open Data Census: There is an international Open Data Census for Open Government Data. After some discussion, the international OpenGLAM working group has reached the conclusion that the approach used cannot directly be applied to the GLAM sector, as heritage institutions' collections are rather heterogeneous.
- Open Government Vorgehensmodell (KDZ Zentrum für Verwaltungsforschung, Wien)
Picture This
connected frame with historic images
A connected picture frame displaying historic images
Story
- The Picture This “smart” frame shows police photographs of homeless people by Carl Durheim (1810-1890)
- By looking at a picture, you trigger a face detection algorithm that analyses both you and the homeless person
- The algorithm detects gender, age and mood of the person on the portrait (not always right)
- You, as a spectator, become part of the system / algorithm judging the homeless person
- The person on the picture is at the mercy of the spectator, once again
How it works
- The picture frame has a camera doing face detection for presence detection (see the sketch after this list)
- Pictures have been pre-processed using a cloud service
- Detection is still rather slow (should run faster on the Raspi 2)
- Here's a little video https://www.flickr.com/photos/tamberg/16053255113/
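As an illustration, a minimal sketch of camera-based presence detection with OpenCV's Haar cascade (an assumption chosen for the example; the actual frame pre-processed the portraits with a cloud service, see the Software links below):

import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
camera = cv2.VideoCapture(0)  # e.g. the Raspi camera via V4L2

while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) > 0:
        print("visitor detected: pause the slideshow")

camera.release()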
Questions (not) answered
- Who were those people? Why were they homeless? What was their crime?
- How would we treat them? How will they / we be treated tomorrow? (by algorithms)
Data
- http://make.opendata.ch/wiki/event:2015-02
- http://make.opendata.ch/wiki/data:glam_ch
- https://commons.wikimedia.org/wiki/Category:Durheim_portraits_contributed_by_CH-BAR
Team
- @ram00n
- @tamberg
- and you
Ideas / Iterations
- Download the pictures to the Raspi and display one of them (warmup)
- Slideshow and turning the images 90° to adapt to the screensize
- Play around with potentiometer and Arduino to bring an analog input onto the Raspi (which only has digital I/O)
- Connect everything and adapt the slideshow speed with the potentiometer
- Display the name (extracted from the filename) below the picture
next steps, more ideas:
- Use the Raspi Cam to detect a visitor in front of the frame and stop the slideshow
- Use the Raspi Cam to take a picture of the face of the visitor
- Detect faces in the camera picture
- Detect faces in the images [DONE, manually, using online service]
- …merge visitor and picture faces
Material
- Raspi https://www.adafruit.com/products/998
- 7inch TFT https://www.adafruit.com/products/947
- Picture frame http://www.thingiverse.com/thing:702589
Software
- http://www.raspberrypi.org/downloads/ - Raspbian
- http://www.pygame.org - A game framework, allows easy display of images
- https://bitbucket.org/tamberg/makeopendata/src/tip/2015/PictureThis - Scraper and Display
- http://skybiometry.com/Demo
- http://simplecv.org
Links
- https://pinboard.in/search/u:tamberg?query=glamhack
- http://www.open-electronics.org/raspberry-pi-and-the-camera-pi-module-face-recognition-tutorial/
- http://www.raspberrypi.org/facial-recognition-opencv-on-the-camera-board/ - OpenCV on Raspi-Cam
- https://speakerdeck.com/player/083e55006e4a013063711231381528f7 - Slide 106 for face replacement
- http://picamera.readthedocs.org/en/release-1.0/
Not used this time, but might be useful
- https://thinkrpi.wordpress.com/2013/05/22/opencv-and-camera-board-csi/ - old Raspian versions did not have the latest OpenCV installed. Nice howto for that
- https://realpython.com/blog/python/face-recognition-with-python/ - OpenCV, nice but not used
- http://docs.opencv.org - not used directly, only via SimpleCV
- http://makezine.com/projects/pi-face-treasure-box/ - maybe a nice weekend project
- https://learn.adafruit.com/creepy-face-tracking-portrait
- http://www.seeedstudio.com/depot/101LCD-Display-1366x768-HDMIVGANTSCPAL-p-1586.html - 10inch TFT
- http://www.aliexpress.com/item/FreeShipping-Banana-Pro-Pi-7-inch-LVDS-LCD-Module-7-Touch-Screen-F-Raspberry-Pi-Car/32246029570.html - 7inch TFT mit LVDS Flex Connector
Portrait Domain
alternate identity social media platform
(Original working title: Portrait Domain)
This is a concept for a gamified social media platform / art installation aiming to explore alternate identity, reflect on the usurping of privacy through facial recognition technology, and make use of historic digitized photographs in the Public Domain to recreate personas from the past. Since the #glamhack event where this project started, we have developed an offline installation which uses Augmented Reality to explore the portraits. See videos on Twitter or Instagram.
View the concept document for a full description.
Data
The exhibit gathers data on user interactions with historical portraits, which is combined with analytics from the web application on the Logentries platform:
Team
Launched by loleg at the hackdays, this project has already had over a dozen collaborators and advisors who have kindly shared time and expertise in support. Thank you all!
Please also see the closely related projects picturethis and graphingthestateless.
Portrait Domain
This is a project demo developed for the OpenGLAM.ch Hackathon in Berne, Switzerland on February 27-28, 2015. For background information see the wiki page (make.opendata.ch).
Stack
| Tool | Name | Advantage |
|---|---|---|
| Server distro | Ubuntu 14.10 x64 | Latest Linux |
| WSGI proxy | Gunicorn | Manages workers automatically |
| Web proxy | Nginx | Fast and easy to configure |
| Framework | Flask | Single-file approach for MVC |
| Data store | MongoDB | No schema needed, and scalable |
| DevOps | Fabric | Agentless and Pythonic |
In addition, a Supervisor running on the server provides a daemon to protect the Gunicorn-Flask process.
Developer setup
Based on the MiniTwit application, a prototype of a Twitter-like multi-user social network. The original application depends on SQLite; however, we have focused on using MongoDB for this project.
To install, set up a config.py, which can be just a blank file on your local machine.
(1) Make sure you have a current version of Python and Virtualenv, as well as XML libraries:
sudo apt-get install python python-virtualenv
sudo apt-get install libxml2-dev libxslt-dev libz-dev
(2) Set up a virtual environment:
virtualenv .venv
. .venv/bin/activate
pip install -r requirements.txt
(3) Run the server:
python minitwit.py
Deployment
1. Install Fabric and clone the Github repo
The DevOps tool is Fabric, which is based simply on SSH. The fabfile.py and the staging Flask files are stored on GitHub. Install Fabric and download fabfile.py to the local machine before deployment.
sudo pip install fabric
wget https://raw.githubusercontent.com/dapangmao/minitwit-mongo-ubuntu/master/fabfile.py
fab -l
2. Input IP from the virtual machine
A new VM's IP address and root password are usually sent by email. Modify the head of fabfile.py accordingly. There are quite a few cloud providers cheaper than Amazon EC2 for prototyping; for example, a minimal instance from DigitalOcean costs only five dollars a month. If an SSH key has been uploaded, the password can be left out.
env.hosts = ['YOUR IP ADDRESS'] # <--------- Enter the IP address
env.user = 'root'
env.password = 'YOUR PASSWORD' # <--------- Enter the root password
3. Fire up Fabric
Now it is time to formally deploy the application. With the command below, Fabric will first install pip, git, nginx, gunicorn, supervisor and the latest MongoDB, and configure them sequentially. In less than 5 minutes, a Flask and MongoDB application will be ready for use. Since DigitalOcean has its own software repository for Ubuntu and its VMs are on SSD, the deployment is even faster, usually finishing within one minute.
fab deploy_minitwit
Public Domain Game
with Web-linked cards
A card game to discover the public domain. QR codes link physical cards with data and digitized works published online. This project was started at the 2015 Open Cultural Data Hackathon.
Sources
Team
cardgame
Schweizer Kleinmeister
An Unexpected Journey
This project shows a large image collection in an interactive 3D-visualisation. About 2300 prints and drawings from "Schweizer Kleinmeister" from the Gugelmann Collection of the Swiss National Library form a cloud in the virtual space.
The images are grouped according to specific parameters that are automatically calculated by image analysis and based on metadata. The goal is to provide fast and intuitive access to the entire collection, all at once. And this is not accomplished by means of a simple list or slideshow, where items can only be sorted linearly along one axis like time or alphabet. Instead, many more dimensions are taken into account. These dimensions (22 for techniques, 300 for image features, or even 2300 for descriptive text analysis) are then projected onto 3D space, while preserving topological neighborhoods in the original space.
The project refrains from setting up a rigid ontology and forcing the items to fit into premade categories. It rather lets clusters emerge from attributes contained in the images and texts themselves. Groupings can be derived but are not dictated.
The user can navigate through the cloud of images. Clicking on one of the images brings up more information about the selected artwork. For the mockup, three different non-linear groupings were prepared. The goal however is to make the clustering and selection dependent on questions posed by any individual user. A kind of personal exhibition is then curated, different for every spectator.
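A minimal sketch of such a neighborhood-preserving projection using scikit-learn's t-SNE (an illustrative choice of algorithm, not necessarily the project's; the feature array is random stand-in data, not the collection's actual descriptors):

import numpy as np
from sklearn.manifold import TSNE

# stand-in for the real per-image descriptors (2300 images x 300 features)
features = np.random.rand(2300, 300)
coords3d = TSNE(n_components=3, perplexity=30).fit_transform(features)
print(coords3d.shape)  # (2300, 3): a position in space for every image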
Update: Adoption for Virtual Reality
For more info, see here: http://www.mathiasbernhard.ch/floating-through-an-image-galaxy-in-vr/
(by Mathias on 2015-05-29)
Open Data used
Gugelmann Collection, Swiss National Library
http://opendata.admin.ch/en/dataset/sammlung-gugelmann-schweizer-kleinmeister
http://commons.wikimedia.org/wiki/Category:CH-NB-Collection_Gugelmann
Techniques / Libraries
Crawling, extraction, image processing, machine learning:
- Python (BeautifulSoup, sklearn, skimage)
- Java (RegEx)
Places search for Lat/Lng coordinates:
- GoogleMaps API
Visualization:
- Processing.org
Team
- Mathias Bernhard ( @W0RB1T)
- Jorge Orozco ( @j_orozco)
- Nikola Marincic ( @desiringmachine)
- Sonja Gasser ( @sonja_gasser)
Links
http://www.mathiasbernhard.ch/schweizer-kleinmeister-an-unexpected-journey/
Swiss Games Showcase
from a database of the computer game scene
A website made to show off the work of the growing Swiss computer game scene. The basis of the site is a crowdsourced list of Swiss games. This list is parsed, and additional information on each game is automatically gathered. Finally, a static showcase page is generated.
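A minimal sketch of such a parse-and-generate pipeline (not the project's actual code; the local CSV filename and its columns are assumptions):

import csv
import html

# read the crowdsourced list, exported as CSV
rows = list(csv.DictReader(open("swiss_games.csv")))

# render one card per game and write a static page
cards = "\n".join(
    "<div class='game'><h2>%s</h2><a href='%s'>play</a></div>"
    % (html.escape(r["title"]), html.escape(r["url"]))
    for r in rows)
with open("index.html", "w") as f:
    f.write("<html><body>%s</body></html>" % cards)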
Data
- Source List on Google Docs of Released and Playable Swiss Video Games
Team
Links
The Endless Story
computer generated narration
The Endless Story
This project is being developed as part of the #GLAMhack event at the National Library in Switzerland.
Team
Uses
Deployment
./deploy.sh
Thematizer
enriching cultural information online
Problem:
A lot of cultural data (metadata, texts, videos, photos) are available to the community, in Open Data formats or not, but go unused and sleep in data silos. These data could be used, among other things, in tourism (creating services or products that highlight the authenticity of the visited area) or in museums (creating thematic tours based on visitor profiles).
Proposition:
We propose to work on an application able to query different local specialized cultural datasets and to link the results with the huge, global, universal Wikipedia and Google Maps, enriching the cultural information returned to the visitor.
Prototype 1 (Friday):
A single HTML page with a search text box and a button. It queries Wikipedia with the entered value, collects the JSON page content, parses the table of contents keeping only level-1 headers, and displays the result as both a vertical list and a word cloud.
Prototype 2 (Saturday):
A single HTML page accessing the dataset of the Médiathèque Valais (http://xml.memovs.ch/oai/oai2.php?verb=ListIdentifiers&metadataPrefix=oai_dc&set=k), getting all the "qdc" XML pages and displaying them in a vertical list. When you click on one of the topics, information about it is shown on the right of the page: an image (if one exists) and a cloud of descriptions. Clicking on one of the descriptions then queries Wikipedia with that value and displays the content. If we have enough time, we will also read a location tag from the Médiathèque XML file and display the location on Google Maps.
Demo
- http://codepen.io/alogean/full/GgGOvv/
- http://codepen.io/alogean/full/jEKKgZ/
- http://codepen.io/alogean/full/ByVPod/ (last version)
Resources (examples, similar applications, etc.):
This idea is based on a similar approach developed during Museomix 2014: http://www.museomix.org/prototypes/museochoix/. We are trying to figure out how to extend this idea to contexts other than museums and to datasets other than those proposed by that particular museum.
Data
Examples of requests made (see the sketch after this list):
- https://fr.wikipedia.org/w/api.php?action=query&prop=extracts&exchars=175&titles=Lausanne&format=json
- https://fr.wikipedia.org/w/api.php?action=mobileview&page=Lausanne§ions=1&format=json
- http://xml.memovs.ch/oai/
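For illustration, the first Wikipedia request above can be reproduced with a few lines of Python (the prototypes themselves are client-side JavaScript on CodePen):

import requests

r = requests.get("https://fr.wikipedia.org/w/api.php", params={
    "action": "query",
    "prop": "extracts",
    "exchars": 175,
    "titles": "Lausanne",
    "format": "json",
})
# print the short extract returned for each matched page
for page in r.json()["query"]["pages"].values():
    print(page["title"], page.get("extract", ""))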
Other potentially interesting datasets for future work:
- http://www.vallesiana.ch/#!search
- http://newspaper.archives.rero.ch/Olive/ODE/NVE_FR/
- http://newspaper.archives.rero.ch/olive/ODE/CONF_FR/
- www.valais-wallis-digital.ch
Team
Links
- https://twitter.com/mdammers?lang=fr - the Wikidata expert who gave us some useful links to the Wikipedia API.
- https://pad.okfn.org/p/Lausanne_Museums_GLAMhack - pad with information about the Museris 3.0 initiative in Lausanne and access to their manual database of metadata. (Not sure the links will work very long after the hackday...)
ViiSoo
water visualisation and sonification remix
Exploring a water visualisation and sonification remix with Open Data, to make it accessible for people who don't care about data.
Why water? Water is an openly accessible element for life. It flows like data, and everyone should have access to it.
We demand Open Access to data and water.
Join us, we have stickers.
Open Data Used
- Aerial Photographs by Eduard Spelterini
- Historical Photograph Collection: "Active Service in the World War I (1914-1918)"
- Images from the Zurich Central Library's Special Collections
- Samples from Freesound.org
- Smetana - Die Moldau
Tec / Libraries
- HTML5, Client Side Javascript, CSS and Cowbells
- Flocking JavaScript audio synthesis framework
- Hugged by stackoverflow
Team
Created by the members of Kollektiv Zoll
ToDos
- Flavours (snow, lake, river)
- Image presentation
Realtime Input and Processing of Data from a URL
See it live
WikiProject "Cultural heritage"
Swiss monuments in Wikidata
Photo: Newton2, CC by 2.5, via Wikimedia Commons
Update information and review errors regarding Swiss monuments in Wikidata as a part of the WikiProject Built heritage (formerly "Cultural heritage").
Data
- Wikimedia Commons Monuments database: https://commons.wikimedia.org/wiki/Commons:Monuments_database
- Kulturgüterschutz-Inventory: http://datahub.io/dataset/kgs-inventar-inventory-of-the-protection-of-cultural-property-in-switzerland
- Kulturgüterschutz-GIS-Theme: http://s.geo.admin.ch/62fbccb361
Resources
- Connected Open Heritage Project: Wikidata migration
- Example of how to map WLM data
- Wikimania 2016 Workshop: Wikidata and GLAM
Team
Links
- Tasks done: https://www.wikidata.org/w/index.php?namespace=&tagfilter=&target=Property%3AP381&showlinkedto=1&hidebots=&title=Special%3ARecentChangesLinked
- Relevant documentation
- Swiss Monuments in Wikipedia: https://de.wikipedia.org/wiki/Portal:Schweizer_Denkmallisten
- Swiss Monuments in Wikidata: https://www.wikidata.org/wiki/Property_talk:P381
- Errors:
- Todo-List:
Zürich 1799
published to Wikipedia and Commons
Adding the contents of the publication "Zürich 1799: Eine Stadt erlebt den Krieg" ("Zurich 1799: a city experiences the war"), published by the City of Zurich under a CC BY-SA 3.0 licence, to Wikipedia, and the images to Wikimedia Commons.
See: First Draft of Wikipedia Article
Data
- Collection of texts, images, maps and geo data provided by the City of Zurich - Urban Planning Office
Team
- Martin E. Walder
- Micha Rieser
- Reto
- Marco Sieber
- wild
Challenges
Spock Monroe Art Brut
photo mosaic and tribute
This is a photo mosaic based on street art in SoHo, New York City - "Spock/Monroe" (CC BY 2.0), as photographed by Ludovic Bertron. The mosaic is created out of minuscule thumbnails (32 pixels wide/tall) of 9486 images from the Collection de l'Art Brut in Lausanne provided on http://musees.lausanne.ch/, using the Metapixel software running on Linux at the 1st Swiss Open Cultural Data Hackathon.
This is a humble tribute to Leonard Nimoy, who died at the time of our hackathon.
Data
- http://musees.lausanne.ch/ (courtesy of :ratio/cruncher)