All our hack are belong to us.

Active projects and challenges as of 21.12.2024 14:57.



Back to the Greek Universe


~ PITCH ~

Back to the Greek Universe is a web application that allows users to explore the ancient Greek model of the universe in virtual reality, so that they can appreciate the detailed knowledge the Greeks had of the movements of the celestial bodies observable from the Earth's surface. The model is based on the work of Claudius Ptolemy, which is characterized by its geocentric view of the universe, with the Earth at the center.

Ptolemy placed the planets in the following order:

  1. Moon
  2. Mercury
  3. Venus
  4. Sun
  5. Mars
  6. Jupiter
  7. Saturn
  8. Fixed stars

Renaissance woodcut illustrating the Ptolemaic sphere model

The movements of the celestial bodies as they appear to earthlings are expressed as a series of superposed circular movements (see deferent and epicycle theory), characterized by varying radii and speeds. The tabular values that serve as inputs to the model have been extracted from the literature.
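To make the deferent-and-epicycle idea concrete, the apparent position of a planet can be computed as the sum of two rotating vectors: one tracing the deferent circle around the Earth, the other tracing the epicycle around that moving center. A minimal Python sketch (the radii and angular speeds below are made-up placeholders, not the tabular values used in the project):

import math

# Apparent position of a planet at time t, as seen from a central Earth:
# the epicycle center travels along the deferent (radius R, angular speed
# omega), and the planet rides the epicycle (radius r, angular speed omega_e).
def ptolemaic_position(t, R, omega, r, omega_e):
    cx = R * math.cos(omega * t)        # epicycle center on the deferent
    cy = R * math.sin(omega * t)
    x = cx + r * math.cos(omega_e * t)  # superposed epicycle motion
    y = cy + r * math.sin(omega_e * t)
    return x, y

# With a fast epicycle on a slow deferent, the planet periodically appears
# to move backwards (retrograde motion), as observed from the Earth.
for t in range(10):
    print(ptolemaic_position(t, R=10.0, omega=0.1, r=2.0, omega_e=1.0))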

Demo Video

Claudius Ptolemy (c. 100–160 AD) was a Greek scientist working at the Library of Alexandria. One of his most important works, the «Almagest», sums up the geographic, mathematical and astronomical knowledge of its time. It is the first outline of a coherent system of the universe in the history of mankind.

Back to the Greek Universe is a VR model that rebuilds Ptolemy's system of the universe at a scale of 1:1 billion. The planets are rendered 100 times larger, the Earth rotates 100 times more slowly, and the planets' orbital periods run 1 million times faster than they would according to Ptolemy's calculations.
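The scale factors translate directly into model units. A back-of-the-envelope sketch of the conversion (the constants mirror the factors above; the input values are illustrative, and none of this is the project's actual code):

SCALE = 1e-9            # overall scale 1:1 billion
PLANET_ZOOM = 100       # planets rendered 100 times larger
TIME_LAPSE = 1_000_000  # orbital periods 1 million times faster

def model_radius(real_radius_m):
    # Scale down 1:1 billion, then enlarge 100x for visibility.
    return real_radius_m * SCALE * PLANET_ZOOM

def model_period(real_period_s):
    # Compress time so that years pass within seconds.
    return real_period_s / TIME_LAPSE

# Example: an Earth-like radius of 6.371e6 m becomes about 0.64 m in the
# model, and a 365-day orbit takes about 32 seconds.
print(model_radius(6.371e6), model_period(365 * 86400))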

Back to the Greek Universe was coded and presented at the Swiss Open Cultural Data Hackathon/mix'n'hack 2019 in Sion, Switzerland, from Sept 6-8, 2019, by Thomas Weibel, Cédric Sievi, Pia Viviani and Beat Estermann.

Instructions

This is how to fly Ptolemy's virtual spaceship:

  • Point your smartphone camera towards the QR code, tap on the popup banner in order to launch into space.

  • Turn around and discover the ancient Greek solar system. Follow the planets' epicyclic movements (see above).

  • Tap in order to travel through space, in any direction you like. Every single tap will teleport you roughly 18 million miles forward.

  • Back home: Point your device vertically down and tap in order to teleport back to earth.

  • Gods' view: Point your device vertically up and tap in order to overlook Ptolemy's system of the universe from high above.

The cockpit on top is a time and distance display: the years and months indicator gives you an idea of how rapidly time goes by in the simulation, while the miles indicator always displays your current distance from the Earth's center (in millions of nautical miles).

Data

The data used includes 16th-century prints of Ptolemy's main work, the Almagest (in both Greek and Latin), and high-resolution surface photos of the planets in Mercator projection. The photos are mapped onto rotating spheres by means of Mozilla's WebVR framework A-Frame.

Earth Earth map (public domain)

Moon Moon map (public domain)

Mercury Mercury map (public domain)

Venus Venus map (public domain)

Sun Sun map (public domain)

Mars Mars map (public domain)

Jupiter Jupiter map (public domain)

Saturn Saturn map (public domain)

Stars map (milky way) (Creative Commons Attribution 4.0 International)

Primary literature

Secondary literature

Version history

  • 2019/09/07 v1.0: Basic VR engine, interactive prototype
  • 2019/09/08 v1.01: Cockpit with time and distance indicator
  • 2019/09/13 v1.02: Space flight limited to stars sphere, minor bugfixes
  • 2019/09/17 v1.03: Planet ecliptics adjusted

Media

  • Back to the Greek Universe Video (mp4), public domain

Team


CoViMAS


~ PITCH ~

Collaborative Virtual Museum for All Senses (CoViMAS) is an extended virtual museum which engages all the senses of its visitors. It is a substantial upgrade and expansion of our award-winning GLAMhack 2018 project "Walking around the Globe" (http://make.opendata.ch/wiki/project:virtual_3d_exhibition), in which the DBIS Group of the University of Basel teamed up with the ETH Library to introduce a prototype of an exhibition in Virtual Reality.

CoViMAS aims to provide a collaborative environment for multiple visitors in the virtual museum. This feature allows them to have a shared experience through different virtual reality devices.

Additionally, CoViMAS enriches the user experience by providing physical objects that can be manipulated in virtual space. Thanks to the mix'n'hack organizers and the FabLab (https://fablab-sion.ch/), users are able to touch postcards, view them closely, and feel their texture.

To add a modern touch to the older pictures in the provided data, we display colorized images alongside the existing ones, giving visitors to the Virtual Museum a more lively look into the past.

Video: https://make.opendata.ch/wiki/_media/project:covimas.mp4

Project Timeline

Day One

CoViMAS joins forces across disciplines: the team consists of a maker, a content provider, developers, a communicator, a designer, and a user experience expert. Having different backgrounds and expertise created a great opportunity to explore different ideas and broaden the horizons of the project.

Two vital components of this project are the virtual reality headsets and the datasets to be used. Our HTC Vive Pro VR headsets were converted to wireless mode after our last experiment showed that freedom of movement, with no wires attached to the user, improves both the user experience and the practicality of use.

Our content provider and designer spent a considerable amount of time searching for representative postcards and audio that could be integrated into the virtual space and had the potential to improve the virtual reality experience by adding extra senses. This included selecting postcards that can be touched and seen in both virtual and non-virtual reality. Additionally, the idea came up of hearing a sound related to the picture being viewed. This audio should correlate with the picture and recreate the sensation of the pictured environment for the user in the virtual world.

To integrate modern methods of image manipulation through artificial intelligence, we used a deep learning method to colorize the gray-scale images of the dataset "Fotografien aus dem Wallis von Charles Rieder". The colorized images allow visitors to get a more tangible feeling for the pictures they are viewing. The initial run of the algorithm revealed the challenges we face: faded or scratched parts of the pictures, for example, could not be colorized very well.
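As an illustration of how such a colorization step can be set up, here is a minimal sketch using OpenCV's DNN module with the pre-trained colorization network of Zhang et al. (ECCV 2016). The model file names are the ones distributed with that paper, the image file name is a placeholder, and the snippet is an assumption about the general approach rather than the exact code used at the hackathon:

import cv2
import numpy as np

# Pre-trained colorization network of Zhang et al.; the three files are
# available from the authors' repository.
net = cv2.dnn.readNetFromCaffe("colorization_deploy_v2.prototxt",
                               "colorization_release_v2.caffemodel")
pts = np.load("pts_in_hull.npy")  # cluster centers of the ab color space

# Feed the color cluster centers into the network's rebalancing layers.
pts = pts.transpose().reshape(2, 313, 1, 1).astype(np.float32)
net.getLayer(net.getLayerId("class8_ab")).blobs = [pts]
net.getLayer(net.getLayerId("conv8_313_rh")).blobs = [np.full((1, 313), 2.606, np.float32)]

# Read the gray-scale photograph and extract its lightness (L) channel.
img = cv2.imread("wallis_photo.jpg")
lab = cv2.cvtColor(img.astype(np.float32) / 255.0, cv2.COLOR_BGR2LAB)
L = cv2.resize(lab[:, :, 0], (224, 224)) - 50  # network input, mean-centered

# Predict the ab (color) channels and upsample them to the original size.
net.setInput(cv2.dnn.blobFromImage(L))
ab = net.forward()[0].transpose(1, 2, 0)
ab = cv2.resize(ab, (img.shape[1], img.shape[0]))

# Recombine the original lightness with the predicted colors.
out = np.concatenate((lab[:, :, :1], ab), axis=2)
out = np.clip(cv2.cvtColor(out, cv2.COLOR_LAB2BGR), 0, 1)
cv2.imwrite("wallis_photo_colorized.jpg", (out * 255).astype(np.uint8))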


Day Two

Although the VR exhibition is taken from our previous participation in GLAMhack 2018, it needed to be adjusted to the new content. We designed the rooms to showcase the dataset "Postkarten aus dem Wallis (1890-1950)". At this point, the postcards selected for enrichment with additional senses were sent to the FabLab to create a haptic card, as well as a feather pallet to be used alongside one postcard that depicts a goose.

The fabricated elements of our exhibition are attached to a tracker that is visible through the VR glasses, which lets the user locate the object and sense it.

The colorization improved through the day, thanks to alterations in the training setup and in the parameters used to tune the images. The results at this stage are relatively good.

The VR exhibition hall was adjusted to automatically load the postcard images, with the colorized versions displayed alongside the originals.

Late at night, while finalizing the work for the next day, most of our stickers changed status from the "Implementation" phase to the "Done" phase!

Day Three

CoViMAS reached its final stage on the last day. The room design is finished and the locations of the images on the walls are determined. The tracker location is updated in the VR environment to represent the real location of the object. With this improvement, a postcard can be touched and viewed simultaneously.

Data

Team


Human Name Creativity


~ PITCH ~

Following last year's project about dog names, the Dog Name Creativity Survey of New York City, the focus this year was on human names. Swiss Post provides datasets with the top 5 names for each postal code. The goal was again to create a creativity index, but this year under the motto of user involvement: you can enter your own name, set the language your name comes from, and see yourself in the ranking. The datasets are not ideal for this task, because they contain only the top 5 names per postal code rather than all names, so users have a high chance of getting a "score buff" for uniqueness. Nevertheless, it is a fun project.

Unfortunately it wasn't finished by the end of the hackathon and has no UI, but here is the last draft version of the code:

import pandas as pd

# Letter-frequency ranks for German (d) and French (f): the more common a
# letter is in the language, the lower its score.
HaufeD_ = {"e":1,"n":2,"i":3,"r":4,"s":5,"-":5,"t":6,"a":7,"d":8,"h":9,"u":10,"l":11,"c":12,"g":13,"m":14,"o":15,"b":16,
           "w":17,"f":18,"k":19,"z":20,"v":21,"p":22,"ü":23,"ä":24,"ö":25,"j":26,"x":27,"y":28,"q":29}
HaufeF_ = {"e":1,"a":2,"s":3,"t":4,"i":5,"-":5,"r":6,"n":7,"u":8,"l":9,"o":10,"d":11,"m":12,"c":13,"p":14,"é":15,"v":16,
           "h":17,"g":18,"f":19,"b":20,"q":21,"j":22,"à":23,"x":24,"è":25,"ê":26,"z":27,"y":28,"k":29,"ô":29,"û":29,"w":29,
           "â":29,"î":29,"ü":29,"ù":29,"ë":29,"Œ":29,"ç":29,"ï":29}
# HaufeI_ (Italian letter ranks) is not implemented yet.
landics = {"d":HaufeD_,"f":HaufeF_}

def KreaWert(name_, lan):
    """Creativity score for a first name in the given language."""
    dic = landics[lan]
    name_ = str(name_)
    # Rare letters score higher; letters missing from the table count 20.
    wert_ = sum(dic.get(letter, 20) for letter in name_.lower())
    # Scale by rarity of the whole name: the most frequent name keeps only
    # 20% of its letter score, a name that occurs once is boosted 5.2-fold;
    # names absent from the dataset are left unscaled.
    if name_ in H_:
        wert_ = wert_ * ((Hmax - H_[name_]) / (Hmax - 1) * 5 + 0.2)
    # Penalize names far from the average length (outside DNL +/- 2) by 20%.
    if len(name_) < (DNL - 2) or len(name_) > (DNL + 2):
        wert_ = wert_ / 10 * 8
    return round(wert_, 1)

df = pd.read_csv("vornamen_proplz.csv", sep=",")
df["vorname"] = df["vorname"].str.strip()

# Average name length (DNL) across the dataset.
insgeNamLan_ = sum(len(str(name)) for name in df["vorname"])
DNL = round(insgeNamLan_ / len(df["vorname"]))

# Name frequencies (H_): total occurrences of each name over all postal codes.
H_ = {}
for counter, name in enumerate(df["vorname"]):
    H_[name] = H_.get(name, 0) + df["anzahl"][counter]
Hmax = max(H_.values())
Hmin = min(H_.values())

# Italian ("i") is not supported yet, so only "d" and "f" are accepted.
lan = input("Set the language of your name (d/f): ")
name_ = input("What is your first name? ")

print(KreaWert(name_, lan))

Data

Vor- und Nachnamen pro Postleitzahl (first and last names per postal code):

Team


Opera Forever


~ PITCH ~

Opera Forever is an online collaboration platform and social networking site to collectively explore large amounts of opera recordings.

The platform allows users to tag audio sequences with various types of semantics, such as personal preference, emotional reaction, specific musical features, technical issues, etc. Through the analysis of personal preference and/or emotional reaction to specific audio sequences, a characterization of personal listening tastes will be possible, and people with similar (or very dissimilar) tastes can be matched. The platform will also contain a recommendation system based on preference information and/or keyword search.

Background: The Bern University of the Arts has inherited a large collection of about 15'000 hours of bootleg live opera recordings. Most of these recordings are unique, and many individual recordings are rather long (up to 3-4 hours); hence the idea of segmenting the recordings so as to allow for the creation of semantic links between segments and to enhance the possibilities of collectively exploring the collection.

Core Idea: Users engaging in "active" listening leave semantic traces behind that can be used as a resource to guide further exploration of the collection, both by themselves and by third parties. The approach can be used for an entire spectrum of users, ranging from occasional opera listeners, through opera amateurs, to interpretation researchers. The tool can be used as a collaborative tagging platform among research teams or within citizen science settings. By putting the focus on the listeners and their personal reaction to the audio segments, the perspective of analysis can be switched to the user, e.g. by creating typologies or clusterings of listening tastes or by using the approach for match-making in social settings.
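To make the matching idea concrete, here is a minimal sketch of how listening tastes could be compared; the data model and names are hypothetical, not the platform's actual schema. Each user is reduced to their star ratings per segment, and two users are compared by cosine similarity over the segments they both rated:

import math

# Hypothetical ratings: user -> {segment id: stars (1-5)}.
ratings = {
    "alice": {"tosca_act1_seg3": 5, "tosca_act2_seg1": 4, "aida_act1_seg2": 2},
    "bob":   {"tosca_act1_seg3": 5, "tosca_act2_seg1": 5, "aida_act1_seg2": 1},
    "carol": {"tosca_act1_seg3": 1, "aida_act1_seg2": 5},
}

def taste_similarity(u, v):
    # Cosine similarity over the segments both users have rated.
    common = ratings[u].keys() & ratings[v].keys()
    if not common:
        return 0.0
    dot = sum(ratings[u][s] * ratings[v][s] for s in common)
    norm_u = math.sqrt(sum(ratings[u][s] ** 2 for s in common))
    norm_v = math.sqrt(sum(ratings[v][s] ** 2 for s in common))
    return dot / (norm_u * norm_v)

# Similar tastes score close to 1, dissimilar tastes noticeably lower;
# both extremes are useful for match-making.
print(taste_similarity("alice", "bob"))
print(taste_similarity("alice", "carol"))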

Demo Video

https://vimeo.com/358615682

Proof of Concept

Opera Forever (demo application)

A first proof of concept was developed at the Swiss Open Cultural Data Hackathon 2019 in Sion and contains the following features:

  • The user can browse through and listen to the recordings of different performances of the same opera.
  • The individual recordings are segmented into their different parts.
  • By using simple swiping gestures, the user can navigate between the individual segments of the same recording (swiping left or right) or between different recordings (swiping up or down). (swiping is not yet implemented, but you can click on the respective arrows)
  • For each segment, the user can indicate to what extent they like that particular segment (1 to 5 stars). (not implemented yet)
  • Based on this information, individual preference lists and collective hit parades are generated. (not implemented yet)
  • Also, it will be possible to cluster users according to their musical taste, which opens up the possibility of matching users by taste or building recommendation systems. (not implemented yet)

A second proof of concept was developed in the context of the Master's thesis "Einbindung und Nutzung von Kulturdaten in Wikidata im Zusammenhang mit der Ehrenreich-Sammlung" ("Integration and Use of Cultural Data in Wikidata in Connection with the Ehrenreich Collection", Johanna Hentze 2020), containing the following features:

  • selecting audio recordings from the Ehrenreich Collection
  • editing meta information about recordings/performances
  • manual segmentation of audio files (adding, editing)
  • visual editing of audio sequence ("Audacity" style)
  • semantic linking to external resources (e.g. Wikidata)

Segmentation Editor (demo prototype)

Data

  • Metadata: Ehrenreich Collection Database
  • Audio Files: Digitized audio recordings from the Ehrenreich Collection (currently not available online; many of them presenting copyright issues)

  • Photographs of artists: Taken from a variety of websites; most of them presenting copyright issues.

Documentation

Google Doc with Notes

Team


TimeGazer


~ PITCH ~

Welcome to TimeGazer: A time-traveling photo booth enabling you to send greetings from historical postcards.

Based on the wonderful "Postcards from Valais (1890-1950)" dataset, consisting of nearly 4000 historic postcards of Valais, we created a prototype for a Mixed-Reality photo booth.

Choose a historic postcard as a background and a person will be style-transferred virtually onto the postcard.
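The "Blue screen" section below hints at how the compositing can work even before any style transfer is applied: the visitor is captured in front of a blue screen, the blue background is keyed out, and the remaining silhouette is pasted onto the chosen postcard. A minimal chroma-key sketch in Python with OpenCV (file names and HSV thresholds are hypothetical; the style-transfer step itself is not shown):

import cv2
import numpy as np

# Postcard background and blue-screen shot of the visitor (placeholder names).
postcard = cv2.imread("postcard_valais.jpg")
person = cv2.imread("visitor_bluescreen.jpg")
person = cv2.resize(person, (postcard.shape[1], postcard.shape[0]))

# Chroma key: mark every pixel whose hue falls in the blue range as background.
hsv = cv2.cvtColor(person, cv2.COLOR_BGR2HSV)
blue_mask = cv2.inRange(hsv, (90, 80, 80), (130, 255, 255))

# Composite: postcard where the mask is blue, the visitor everywhere else.
foreground = cv2.bitwise_and(person, person, mask=cv2.bitwise_not(blue_mask))
background = cv2.bitwise_and(postcard, postcard, mask=blue_mask)
cv2.imwrite("timegazer_result.jpg", cv2.add(foreground, background))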

https://vimeo.com/358591907

Photobomb a historical postcard

A photo booth for time traveling: send greetings from, and virtually enter, a historical postcard.

Mockup of the process.

Potentially, VR-tracked physical props could be used to add selectable objects virtually to the scene.

Technology

This project is roughly based on a project from last year, which has since grown into an active research project, VIRTUE, at the Databases and Information Systems group of the University of Basel. Hence, we use a similar setup.

Results

Project

Blue screen

Printer box

Standard box on MakerCase:

Modified for the input of paper and output of postcard:

The SVG and DXF box project files.

Data

Quote from the data introduction page:

A collection of 3900 postcards from Valais. Some highlights are churches, cable cars, landscapes and traditional costumes.
Source: Musées cantonaux du Valais – Musée d'histoire

Team

  • Dr. Ivan Giangreco
  • Dr. Johann Roduit
  • Lionel Walter
  • Loris Sauter
  • Luca Palli
  • Ralph Gasser


Challenges