All our hack are belong to us.

Active projects and challenges as of 05.12.2024 02:41.



Alain und Laura sind neue Medien

a poetic journey


~ README ~

Two comedians take a poetic journey along selected cultural datasets. They uncover trivia, funny and bizarre facts, and give life to data in a most peculiar way. In French, German, English and some Italian.

Data

Team

  • Catherine Pugin
  • Laura Chaignat
  • Alain Guerry
  • Dominique Genoud
  • Jérôme Treboux
  • Florian Evéquoz
  • Tools: Knime, Python, D3, sweat, coffee and salmon pasta

Ancestors on Wikidata

automatic visualisation of family trees


~ PITCH ~

Visualise family trees using Wikidata.

Team

  • odi
  • Maarten Dammers
~ README ~

Ancestors gadget for Wikidata

The project was initially created during the 1st Swiss Open Cultural Data Hackathon in 2015.

This simple gadget displays the family tree of a given item from Wikidata. It does this by querying Wikidata for the father/mother values. If you want to use it on Wikidata, you can enable it in your user settings (enable "Ancestor").

The tool is available at the following URL: https://tools.wmflabs.org/family/ancestors.php?q=Q7742&lang=en

Parameters

The gadget takes 3 parameters in the URL:

  • q (query): the root element; it must be a Wikidata ID (e.g. Q7742); default is Q154952 (Willem-Alexander of the Netherlands)
  • lang (language): the language in which the data should be displayed; default is en
  • level: the number of levels of the family tree; default is 5
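
For example, combining all three parameters to display three levels of ancestors of item Q7742 in German:

https://tools.wmflabs.org/family/ancestors.php?q=Q7742&lang=de&level=3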

Artmap

puts art… on a map


~ PITCH ~

Demo: Interactive map on GitHub

A simple webpage created to display the art on a map, allowing users to click on each individual element and follow its URL link:

Find art around you on a map!

Data

Team


catexport

tool for working with Wikimedia data


~ PITCH ~

Export Wikimedia Commons categories to your local machine with a convenient web user interface.

Try it here: http://catexport.herokuapp.com

Data

  • Wikimedia Commons API

Team

  • odi
  • and other team members

At #glamhack a number of categories were made available offline by loleg using this shell script and magnus-toolserver.

~ README ~

catexport

This project is aimed at GLAM institutions that provide their data to the Wikimedia Commons projects and want to extract the categorization done by the community in a structured way. The tool uses the Wikimedia Commons API in the background. It generates CSV with the following format:

filename,category

If a file has multiple categories, there will be multiple entries for it. The first column acts as a unique identifier of the file.
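
A minimal sketch of this export in Python, assuming the public Wikimedia Commons API and the requests library (illustrative only; the actual tool's code may differ):

import csv
import requests

API = "https://commons.wikimedia.org/w/api.php"

def category_members(category):
    # Yield the titles of all files in a Commons category, following continuation.
    params = {"action": "query", "list": "categorymembers", "cmtitle": category,
              "cmtype": "file", "cmlimit": "500", "format": "json"}
    while True:
        data = requests.get(API, params=params).json()
        for member in data["query"]["categorymembers"]:
            yield member["title"]
        if "continue" not in data:
            break
        params.update(data["continue"])

def file_categories(title):
    # Yield the categories assigned to a single file.
    params = {"action": "query", "titles": title, "prop": "categories",
              "cllimit": "500", "format": "json"}
    data = requests.get(API, params=params).json()
    for page in data["query"]["pages"].values():
        for cat in page.get("categories", []):
            yield cat["title"]

with open("export.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for title in category_members("Category:Durheim portraits"):
        for cat in file_categories(title):
            writer.writerow([title, cat])  # one filename,category row per pair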


Cultural Music Radio

geolocalised mobile app


~ PITCH ~

This is a cultural music radio which plays suitable music depending on your GPS location or travel route. Our backend server looks for artists and musicians near the user's location and sends back an array of Spotify music tracks, which are then played in the iOS app.

Server Backend

We use a Python server backend to process RESTful API requests. Clients send their GPS location and receive a list of Spotify tracks which have a connection to this location (e.g. the artist or musician comes from there).
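
A minimal Flask sketch of such an endpoint, with the location-to-track lookup left as a stub (the endpoint name and response shape are assumptions, not the project's actual API):

from flask import Flask, jsonify, request

app = Flask(__name__)

def tracks_near(lat, lon):
    # Stub: the real backend would query MusicBrainz for artists linked to
    # the area around (lat, lon) and map them to Spotify track URIs.
    return ["spotify:track:example1", "spotify:track:example2"]

@app.route("/tracks")
def tracks():
    lat = float(request.args["lat"])
    lon = float(request.args["lon"])
    return jsonify(tracks=tracks_near(lat, lon))

if __name__ == "__main__":
    app.run()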

Javascript Web Frontend

We provide a JavaScript web frontend so the services can be used from any browser. The GPS location is determined via HTML5 location services.

iOS App

In our iOS app we get the user's location and send it to our Python backend. We then receive a list of Spotify tracks, which are played via the Spotify iOS SDK. There is also a map view which shows where the user is currently located and what is around them.

We offer a nicely designed user interface which allows users to quickly switch between music and map view to discover both environment and music! 8-)

Data and APIs

  • MusicBrainz for linking locations to artists and music tracks

  • Spotify for music streaming in the iOS app (account required)

Team

~ README ~

Culture Radio

This is a cultural music radio which plays suitable music depending on your GPS location.
Our backend server looks for artists and musicians near the user's location and sends back an array of Spotify music tracks which are then played in the iOS app.

Wiki

All important facts about this project are documented in our wiki.

More information about this project also on http://quappi.com


Diplomatic Documents and Swiss Newspapers in 1914

interlinked & searchable


~ PITCH ~

This project gathers two data sets: Diplomatic Documents of Switzerland and Le Temps Historical Archive for the year 1914. Our aim is to find links between the two data sets to allow users to more easily search the corpora together. The project is composed of two parts:

  1. The Geographical Browser of the corpora. We extract all places from Dodis metadata and all places mentioned in each article of Le Temps; we then match documents and articles that refer to the same places and visualise them on a map for geographical browsing.
  2. The Text similarity search of the corpora. We train two models on the Le Temps corpus: Term Frequency Inverse Document Frequency and Latent Semantic Indexing with 25 topics. We then develop a web interface for text similarity search over the corpus and test it with Dodis summaries and full text documents.

Data and source code

Documentation

In this project, we want to connect newspaper articles from the Journal de Genève (a Genevan daily newspaper) and the Gazette de Lausanne to a sample of the Diplomatic Documents of Switzerland database (Dodis). The goal is to query the Dodis descriptive metadata and compare occurrences from both data sets, looking for what appears in the press in a given interval of time. Thus, we should be able to examine if and how the written press reflected what was happening at the diplomatic level. The time interval for this project is the summer of 1914.

In this context, we first cleaned the data, for example by removing noise caused by short strings of characters and stopwords. Cleansing is a necessary step to reduce noise in the corpus. We then prepared TF-IDF vectors of words and LSI topics and represented each article in the corpus as such. Finally, we indexed the whole corpus of Le Temps to prepare it for similarity queries. The last step was to build an interface to search the corpus by entering a text (e.g. a Dodis summary).
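
A condensed sketch of that pipeline with gensim (variable names are illustrative; articles is assumed to be the tokenized, stopword-filtered Le Temps corpus):

from gensim import corpora, models, similarities

# Build the dictionary and bag-of-words corpus from the cleaned articles.
dictionary = corpora.Dictionary(articles)
bow_corpus = [dictionary.doc2bow(tokens) for tokens in articles]

# TF-IDF weighting, then LSI with 25 topics on top of it.
tfidf = models.TfidfModel(bow_corpus)
lsi = models.LsiModel(tfidf[bow_corpus], id2word=dictionary, num_topics=25)

# Index the whole corpus for similarity queries.
index = similarities.MatrixSimilarity(lsi[tfidf[bow_corpus]])

# Query with, e.g., a tokenized Dodis summary.
query_bow = dictionary.doc2bow(dodis_summary_tokens)
similarity_scores = index[lsi[tfidf[query_bow]]]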

Difficulties were not only technical. For example, the data are massive: we started doing this work on fifteen days of material, then on three months. Moreover, some Dodis documents were classified (i.e. non-public) at the time, therefore some of the decisions do not appear in the newspaper articles. We also used the TXM software, a platform for lexicometry and statistical text analysis, to explore both text corpora (the Dodis metadata and the newspapers) and to detect frequencies of significant words and their presence in both corpora.

Dodis Map

Team

~ README ~

Diplomatic Documents and Swiss Newspapers in 1914

This project gathers two data sets: Diplomatic Documents of Switzerland (http://dodis.ch/en/home) and Le Temps Historical Archive for the year 1914. Our aim is to find occurrences appearing in both data sets and detect the events they have in common.

More details in the project's wiki

$ pip install flask gensim nltk beautifulsoup4
$ python
>>> import nltk
>>> nltk.download('stopwords')  # fetch the stopword lists used for cleaning

Notes


flying Eduardo

building a universal template for web apps with “Points of Interest”


~ PITCH ~

My idea was, and is, to build a universal template for web apps with "Points of Interest". At the same time, I was thinking about Linked Open Data, data redundancies and things like that. I hope I will someday have time to document these results. But the most important thing was: learn how a hackathon works and get a sticker for my laptop.

| DEMO |

Data

Team

Links


Graphing the Stateless People in Carl Durheim's Photos

to reconstruct relationships


~ PITCH ~


CH-BAR has contributed 222 photos by Carl Durheim, taken in 1852 and 1853, to Wikimedia Commons. These pictures show people who were in prison in Bern for being transients and vagabonds, what we would call travellers today. Many of them were Yenish. At the time the state was cracking down on people who led a non-settled lifestyle, and the pictures (together with the subjects' aliases, jobs and other information) were intended to help keep track of these "criminals".

Since the photos' metadata includes details of the relationships between the travellers, I want to try graphing their family trees. I also wonder if we can find anything out by comparing the stated places of origin of different family members.

I plan to make a small interactive webapp where users can click through the social graph between the travellers, seeing their pictures and information as they go.

I would also like to find out more about these people from other sources ... of course, since they were officially stateless, they are unlikely to have easily-discoverable certificates of birth, death and marriage.

  • Source code: downloads photographs from Wikimedia, parses metadata and creates a Neo4j graph database of people, relationships and places

Data

Team

Links

~ README ~

durheim

Graphing relationships of homeless people photographed by Carl Durheim in Bern, 1852-3.

This code was written for the 1st Swiss Open Cultural Data Hackathon, 27-28 February 2015.

The idea of this project is to explore this collection of photographs from the Swiss Archives: Category:Durheim portraits contributed by CH-BAR. Currently, it is possible to download all of the portraits, parse the associated metadata and load it into a Neo4j graph database. The relationships between people and their places of origin are added to the db and can be browsed with the Neo4j browser interface.

Usage

  • Download and start Neo4j. Open http://localhost:7474 in a browser.
  • In the terminal, enter: python durheim.py
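
Once loaded, the graph can also be queried from Python; a small sketch using the neo4j driver (the label and relationship names are assumptions, check what durheim.py actually creates):

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    # Hypothetical query: list person-to-person relationships in the graph.
    result = session.run(
        "MATCH (a:Person)-[r]->(b:Person) RETURN a.name, type(r), b.name LIMIT 25")
    for record in result:
        print(record["a.name"], record["type(r)"], record["b.name"])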


Historical Tarot Freecell

play online with ancient cards


~ PITCH ~

Historical playing cards are witnesses of the past, icons of the social and economic reality of their time. On display in museums or stored in archives, ancient playing cards are no longer what they once were meant to be: a deck of cards made for playful entertainment. This project aims at making historical playing cards playable again by means of the well-known solitaire card game "Freecell".

Historical Tarot Freecell 1.1

Tarot Freecell is a fully playable solitaire card game coded in HTML 5. It offers random setup, autoplay, reset and undo options. The game features a historical 78-card deck used for games and divination. The cards were printed in the 1880s by J. Müller & Cie., Schaffhausen, Switzerland.

The cards need to be held upright and use Roman numeral indexing. The lack of modern features like point symmetry and Arabic numerals made the deck increasingly unpopular.

Due to the lack of corner indices - a core feature of modern playing cards - the vertical card offset needs to be significantly higher than in other computer adaptations.

Instructions

Cards are dealt out with their faces up into 8 columns until all cards are dealt. The cards overlap but you can see what cards are lying underneath. On the upper left side there is space for 4 cards as a temporary holding place during the game (i.e. the «free cells»). On the upper right there is space for 4 stacks of ascending cards beginning with the aces of each suit (i.e. the «foundation row»).

Look for the aces of the 4 suits -- swords, sticks, cups and coins. As soon as the aces are free (which means that there are no more cards lying on top of them) they will flip to the foundation row. Play the cards between the columns by creating lines of cards in descending order, alternating between swords/sticks and cups/coins. For example, you can place a swords nine onto a coins ten, or a cups jack onto a sticks queen.

Placing cards onto free cells (1 card per cell only) will give you access to other cards in the columns. Look for the lower numbers of each suit and move cards to gain access to them. You can move entire stacks; the number of cards movable at a time is limited to the number of free cells (and empty stacks) plus one.
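
As a worked example of that rule (a direct transcription in Python, not the game's actual code):

def max_movable_cards(free_cells, empty_columns):
    # Rule described above: free cells plus empty stacks, plus one.
    return free_cells + empty_columns + 1

# With 2 free cells and 1 empty column, a run of up to 4 cards can be moved.
assert max_movable_cards(2, 1) == 4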

Game strategy comes from moving cards to the foundations as soon as possible. Try to increase the foundations evenly, so you have cards to use in the columns. If «Auto» is switched on, cards that no other card can be placed on will automatically flip to the foundations.

You win the game when all 8 columns are sorted in descending order. All remaining cards will then flip to the foundations, from ace to king in each suit.

Updates

  • 2015/02/27 v1.0: Basic game engine
  • 2015/02/28 v1.1: Help option offering modern suit and value indices in the upper left corner
  • 2015/03/21 v1.1: Retina resolution and responsive design

Data

Author



Historical Views of Zurich Data Upload

into Wikimedia Commons


~ PITCH ~

Preparation of approx. 200 historical photographs and pictures from the Baugeschichtliches Archiv Zürich for upload onto Wikimedia Commons: enhancing the metadata and adding landmarks and photographers as additional category metadata.

Link to upload to Wikimedia Commons will follow.

Data

Link to upload will follow

Team

  • Micha Rieser
  • Reto Wick
  • wild
  • Marco Sieber
  • Martin E. Walder
  • and other team members

Lausanne Historic GeoGuesser

game with historic photos and modern maps


~ PITCH ~

A basic GeoGuesser game using pictures of Lausanne from the 19th century. All images are available on http://musees.lausanne.ch/ and are part of the Musée Historique de Lausanne Collection.

Data

Team



Oldmaps online

republishing historical Swiss maps


~ PITCH ~

screenshot: georeferencer

screenshot: map from Ryhiner collection in oldmapsonline

Integrate collections of historical Swiss maps into the open platform www.oldmapsonline.org: at least the Ryhiner collection (UB Bern) and manuscript maps from the Zentralbibliothek Zürich, as a pilot for georeferencing maps from a library (ZBZ). A second goal is to load old maps into a georeferencing system and create a competition for the public. For the hackathon, maps and metadata will be integrated into the mentioned platform. At the moment the legal status of metadata from Swiss libraries is not yet clear, and only a few maps are in the public domain (the Ryhiner collection at UB Bern). Another goal is to create a register of historical maps from Swiss collections.

Data

Team

  • Peter Pridal, Günter Hipler, Rudolf Mumenthaler

Links

Lessons Learnt

We encountered several obstacles:

  • Legal aspects: libraries are still reluctant to publish data and even metadata under an open licence. We had permission from the University Library Berne, which published the old maps collection under public domain, but we were not sure about the use of the metadata. Finally we asked for permission and got it from the library's director. Other (indeed most) works are published under restricted conditions, especially on the platforms e-rara and e-manuscripta.
  • Technical aspects: data are usually kept in silos: databases and web services that keep access to the files closed. It was not even easy to get reasonable thumbnails. The ETH library provided an interface for access to its bibliographic metadata, but the limited access was not enough for our use: some metadata were not included.

Lessons learnt:

  • Libraries must declare that their metadata are published under a CC0 license to make reuse possible and clear. This is also important for other projects (like Swissbib linked).
  • Libraries, archives and museums with historical holdings must decide whether they want to share their data for wide usage in order to support cultural and scientific projects. The best framework would be an Open Data Policy for publicly financed institutions.
  • How can this contribution to society be measured? Usually libraries deliver statistics on the usage of their materials to their university or administration. These administrations also have to rethink: what matters is not downloads from the library's website or visits to the reading room, but the contribution to works in science and culture.
  • Web services like e-rara.ch, e-manuscripta.ch and others should support open formats and APIs.

OpenGLAM Inventory

database of heritage institutions


~ PITCH ~

Photo: Gfuerst, CC by-sa 3.0, via Wikimedia Commons

Idea: create a database containing all the heritage institutions in Switzerland. For each heritage institution, all collections are listed. For each collection, the degree of digitization and the level of openness are indicated: are metadata and content available in digital format? Available online? Available under an open license? The challenge is twofold: first, we need to find a technical solution that scales well over time and across national borders; second, the processes to regularly collect data about the collections and their status need to be set up.

Step 1: Compilation of various existing GLAM databases (done)

National inventories of heritage institutions are created as part of the OpenGLAM Benchmark Survey; there is also a network of national contacts in a series of countries which have been involved in a common data collection exercise.

Step 2: GLAM inventory on Wikipedia (ongoing)

Port the GLAM inventory to Wikipedia and enrich it with links to existing Wikipedia articles: see the German Wikipedia project "Schweizer Gedächtnisinstitutionen in der Wikipedia". Track the heritage institutions' presence in Wikipedia. Encourage the Wikipedia community to write articles about institutions that have not yet been covered in Wikipedia. Once all existing articles have been inventoried, the inventory can be transposed to Wikidata.

Further steps

  • Provide a basic inventory as a Linked Open Data Service
  • Create an inventory of collections and their accessibility status

Data

Team

  • beat_estermann
  • various people from the Wikipedia community
  • and maybe you?

Links

  • Open Data Census: There is an international Open Data Census for Open Government Data. After some discussion, the international OpenGLAM working group has reached the conclusion that the approach used cannot directly be applied to the GLAM sector, as heritage institutions' collections are rather heterogeneous.
  • Open Government Vorgehensmodell (KDZ Zentrum für Verwaltungsforschung, Wien)


Picture This

connected frame with historic images


~ PITCH ~

A connected picture frame displaying historic images


Story

  • The Picture This “smart” frame shows police photographs of homeless people by Carl Durheim (1810-1890)
  • By looking at a picture, you trigger a face detection algorithm that analyses both you and the homeless person
  • The algorithm detects the gender, age and mood of the person in the portrait (not always correctly)
  • You, as a spectator, become part of the system / algorithm judging the homeless person
  • The person on the picture is at the mercy of the spectator, once again


How it works

  • Picture frame has a camera doing face detection for presence detection
  • Pictures have been pre-processed using a cloud service
  • Detection is still rather slow (should run faster on the Raspi 2)
  • Here's a little video https://www.flickr.com/photos/tamberg/16053255113/
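
A minimal sketch of the presence-detection step with OpenCV's bundled Haar cascade (the installation's actual pipeline and the cloud pre-processing are not shown):

import cv2

# Load OpenCV's bundled frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)  # the frame's camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces):
        pass  # a visitor is present: pause the slideshow, analyse the face, etc.

cap.release()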


Questions (not) answered

  • Who were those people? Why were they homeless? What was their crime?
  • How would we treat them? How will they / we be treated tomorrow? (by algorithms)

Data

Team

  • @ram00n
  • @tamberg
  • and you

Ideas / Iterations

  1. Download the pictures to the Raspi and display one of them (warmup)
  2. Slideshow and turning the images 90° to adapt to the screen size
  3. Play around with potentiometer and Arduino to bring an analog input onto the Raspi (which only has digital I/O)
  4. Connect everything and adapt the slideshow speed with the potentiometer
  5. Display the name (extracted from the filename) below the picture

next steps, more ideas:

  1. Use the Raspi Cam to detect a visitor in front of the frame and stop the slideshow
  2. Use the Raspi Cam to take a picture of the face of the visitor
  3. Detect faces in the camera picture
  4. Detect faces in the images [DONE, manually, using online service]
  5. …merge visitor and picture faces :-)

Material

Software

Links

Not used this time, but might be useful


Portrait Domain

alternate identity social media platform


~ PITCH ~

(Original working title: Portrait Domain)


This is a concept for a gamified social media platform / art installation aiming to explore alternate identity, reflect on the usurping of privacy through facial recognition technology, and make use of historic digitized photographs in the Public Domain to recreate personas from the past. Since the #glamhack event where this project started, we have developed an offline installation which uses Augmented Reality to explore the portraits. See videos on Twitter or Instagram.

View the concept document for a full description.

Data

The exhibit gathers data on user interactions with historical portraits, which is combined with analytics from the web application on the Logentries platform:

Team

Launched by loleg at the hackdays, this project has already had over a dozen collaborators and advisors who have kindly shared time and expertise in support. Thank you all!

Please also see the closely related projects picturethis and graphingthestateless.

~ README ~

Portrait Domain

This is a project demo developed for the OpenGLAM.ch Hackathon in Berne, Switzerland on February 27-28, 2015. For background information see the wiki page (make.opendata.ch).

Stack

Tool           Name              Advantage
Server distro  Ubuntu 14.10 x64  Latest Linux
WSGI proxy     Gunicorn          Manages workers automatically
Web proxy      Nginx             Fast and easy to configure
Framework      Flask             Single-file approach for MVC
Data store     MongoDB           No schema needed and scalable
DevOps         Fabric            Agentless and Pythonic

In addition, Supervisor runs on the server as a daemon to protect the Gunicorn-Flask process.

Developer setup

Based on the MiniTwit application, a prototype of a Twitter-like multi-user social network. The original application depends on SQLite; however, we have focused on using MongoDB for this project.

To install, set up a config.py which can be just a blank file on your local machine.

(1) Make sure you have a current version of Python and Virtualenv, as well as XML libraries:

sudo apt-get install python virtualenv
sudo apt-get install libxml2-dev libxslt-dev libz-dev

(2) Set up a virtual environment:

virtualenv .venv
. .venv/bin/activate
pip install -r requirements.txt

(3) Run the server:

python minitwit.py

Deployment

1. Install Fabric and clone the GitHub repo

The DevOps tool is Fabric, which is simply based on SSH. The fabfile.py and the staging Flask files are stored on GitHub. We should install Fabric and download the fabfile.py to the local machine before the deployment:

sudo pip install fabric
wget https://raw.githubusercontent.com/dapangmao/minitwit-mongo-ubuntu/master/fabfile.py
fab -l

2. Input IP from the virtual machine

The provider of a new VM usually emails the IP address and the root password. We can then modify the head part of the fabfile.py accordingly. There are quite a few cloud providers cheaper than Amazon EC2 for prototyping; for example, a minimal instance from DigitalOcean costs only five dollars a month. If an SSH key has been uploaded, the password can be ignored.

env.hosts = ['YOUR IP ADDRESS'] # <--------- Enter the IP address
env.user = 'root'
env.password = 'YOUR PASSWORD'  # <--------- Enter the root password
3. Fire up Fabric

Now it is time to formally deploy the application. With the command below, Fabric will first install pip, git, nginx, gunicorn, supervisor and the latest MongoDB, and configure them sequentially. In less than 5 minutes, a Flask and MongoDB application will be ready for use. Since DigitalOcean has its own software repository for Ubuntu, and its VMs are on SSD, the deployment is even faster, usually finishing in one minute:

fab deploy_minitwit


Public Domain Game

with Web-linked cards


~ PITCH ~

A card game to discover the public domain. QR codes link physical cards with data and digitized works published online. This project was started at the 2015 Open Cultural Data Hackathon.
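
A sketch of how each physical card could be linked to its digitized work, using the qrcode Python package (the URL is hypothetical):

import qrcode

# Hypothetical card URL pointing at a digitized public-domain work.
img = qrcode.make("https://example.org/publicdomain/work/123")
img.save("card-123.png")  # print this code onto the physical card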

Sources

Team

  • danib
  • Mario Purkathofer
  • Joël Vogt
  • Bruno Schlatter
  • loleg
~ README ~

cardgame



Schweizer Kleinmeister

An Unexpected Journey


~ PITCH ~

This project shows a large image collection in an interactive 3D-visualisation. About 2300 prints and drawings from "Schweizer Kleinmeister" from the Gugelmann Collection of the Swiss National Library form a cloud in the virtual space.

The images are grouped according to specific parameters that are automatically calculated by image analysis and based on metadata. The goal is to provide fast and intuitive access to the entire collection, all at once. This is not accomplished by means of a simple list or slideshow, where items can only be sorted linearly along one axis such as time or alphabet. Instead, many more dimensions are taken into account. These dimensions (22 for techniques, 300 for image features, or even 2300 for descriptive text analysis) are then projected onto 3D space, while preserving topological neighborhoods in the original space.
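
The exact embedding method is not stated; t-SNE is one standard choice for projecting to 3D while preserving local neighborhoods, sketched here with scikit-learn on an assumed feature matrix:

import numpy as np
from sklearn.manifold import TSNE

# Assumed input: one row per print/drawing, e.g. 300 image-feature dimensions.
features = np.load("gugelmann_features.npy")  # hypothetical file

# Project to 3D while preserving local neighborhood structure.
coords_3d = TSNE(n_components=3, perplexity=30).fit_transform(features)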

The project refrains from coming up with a rigid ontology and forcing the items to fit into premade categories. It rather lets clusters emerge from attributes contained in the images and texts themselves. Groupings can be derived but are not dictated.

The user can navigate through the cloud of images. Clicking on one of the images brings up more information about the selected artwork. For the mockup, three different non-linear groupings were prepared. The goal, however, is to make the clustering and selection dependent on questions posed by each individual user. A kind of personal exhibition is then curated, different for every spectator.


Update: Adoption for Virtual Reality


For more info, see here: http://www.mathiasbernhard.ch/floating-through-an-image-galaxy-in-vr/

(by Mathias on 15|05|29)

Open Data used

Gugelmann Collection, Swiss National Library

http://opendata.admin.ch/en/dataset/sammlung-gugelmann-schweizer-kleinmeister

http://commons.wikimedia.org/wiki/Category:CH-NB-Collection_Gugelmann

Techniques / Libraries

Crawling, extraction, image processing, machine learning:

  • Python (BeautifulSoup, sklearn, skimage)
  • Java (RegEx)

Places search for Lat/Lng coordinates:

  • GoogleMaps API

Visualization:

  • Processing.org

Team

Links

http://www.mathiasbernhard.ch/schweizer-kleinmeister-an-unexpected-journey/


Swiss Games Showcase

from a database of the computer game scene


~ PITCH ~

A website made to show off the work of the growing Swiss computer game scene. The basis of the site is a crowdsourced list of Swiss games. This list is parsed, and additional information on each game is automatically gathered. Finally, a static showcase page is generated.
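
A minimal sketch of that pipeline (the list format, field names and output are assumptions):

import csv
import html

# Assumed: the crowdsourced list as CSV with name, studio and url columns.
with open("swiss_games.csv") as f:
    games = list(csv.DictReader(f))

# Generate a static showcase page, one entry per game.
items = "\n".join(
    '<li><a href="{url}">{name}</a> by {studio}</li>'.format(
        url=html.escape(g["url"]), name=html.escape(g["name"]),
        studio=html.escape(g["studio"]))
    for g in games)

with open("index.html", "w") as f:
    f.write("<ul>\n%s\n</ul>" % items)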

Data

Team

Links


The Endless Story

computer generated narration


~ PITCH ~

A project aiming to tell a story (connected facts) using the structured data of wikidata.org

https://vimeo.com/124210155

Data

Team

Links

~ README ~

The Endless Story

This project is being developed as part of the #GLAMhack event at the National Library in Switzerland.

The Endless Story

Team

Uses

Deployment

./deploy.sh

Thematizer

enriching cultural information online


~ PITCH ~

Problem:

There is a lot of cultural data (metadata, texts, videos, photos) available to the community, in Open Data format or not, that is not valued and sleeps in data silos. These data could be used, non-exhaustively, in the areas of tourism (creation of services or products highlighting the authenticity of the visited area) or in museums (creation of thematic visits based on visitor profiles).

Proposition:

We propose to work on an application able to query different local specialized cultural datasets and to link the results with the huge, global, universal Wikipedia and Google Maps, enriching the cultural information returned to the visitor.

Prototype 1 (Friday):

One HTML page with a search text box and a button. It queries Wikipedia with the entered value, collects the JSON page content, parses the table of contents to keep only level-1 headers, and displays the result as both a vertical list and a word cloud.
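
A sketch of that request against the public MediaWiki API (shown in Python for brevity; the prototype itself runs in the browser, and the page title is only an example):

import requests

# action=parse with prop=sections returns the table of contents as JSON.
r = requests.get("https://fr.wikipedia.org/w/api.php", params={
    "action": "parse", "page": "Valais", "prop": "sections", "format": "json"})

# Keep only level-1 headers, as in Prototype 1.
level1 = [s["line"] for s in r.json()["parse"]["sections"] if s["toclevel"] == 1]
print(level1)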

Prototype 2 (Saturday):

One HTML page accessing the dataset from the Médiathèque Valais (http://xml.memovs.ch/oai/oai2.php?verb=ListIdentifiers&metadataPrefix=oai_dc&set=k), getting all the "qdc" XML pages and displaying them in a vertical list. When you click on one of those topics, you get information about it on the right of the page: one image (if available) and a cloud of descriptions. Clicking on one of the descriptions then queries Wikipedia with that value and displays the content. If we have enough time, we will also get a location tag from the Médiathèque XML file and display the location on Google Maps.

Demo

Resources (examples, similar applications, etc.):

This idea is based on a similar approach that was developed during Museomix 2014: http://www.museomix.org/prototypes/museochoix/. We are trying to figure out how to extend this idea to contexts other than museums and to datasets other than those proposed by that particular museum.

Data

Examples of requests made:

Other potentially interesting datasets for future work:

Team

Links


ViiSoo

water visualisation and sonification remix


~ PITCH ~

Exploring a water visualisation and sonification remix with Open Data, to make it accessible for people who don't care about data.

Why water? Water is an openly accessible element for life. It flows like data, and everyone should have access to it.

We demand Open Access to data and water.

Join us, we have stickers.

Open Data Used

Tech / Libraries

Team

Created by the members of Kollektiv Zoll

ToDos

  • Flavours (snow, lake, river)
  • Image presentation
  • Realtime Input and Processing of Data from a URL

See it live



WikiProject "Cultural heritage"

Swiss monuments in Wikidata



Zürich 1799

published to Wikipedia and Commons


~ PITCH ~

Adding the contents of the publication "Zürich 1799: Eine Stadt erlebt den Krieg" ("Zurich 1799: A City Experiences the War"), published by the city of Zurich under a CC-BY-SA-3.0 licence, to Wikipedia, and the images to Wikimedia Commons.

See: First Draft of Wikipedia Article

Data

Team

  • Martin E. Walder
  • Micha Rieser
  • Reto
  • Marco Sieber
  • wild


Challenges


Spock Monroe Art Brut

photo mosaic and tribute


~ PITCH ~

This is a photo mosaic based on street art in SoHo, New York City: Spock\Monroe (CC BY 2.0), as photographed by Ludovic Bertron. The mosaic is created out of minuscule thumbnails (32 pixels wide/tall) of 9486 images from the Collection de l'Art Brut in Lausanne, provided on http://musees.lausanne.ch/, using the Metapixel software running on Linux at the 1st Swiss Open Cultural Data Hackathon.

This is a humble tribute to Leonard Nimoy, who died at the time of our hackathon.

Data

Team