geographic web


DDIG member Ethan Watrall (Asst. Professor of Anthropology @ MSU) sends us the following information about his upcoming Cultural Heritage Informatics (CHI) field school, which is part of the CHI Initiative at Michigan State University.

Excerpts are quoted below. For full details, please see this PDF link.

Site: <http://chi.matrix.msu.edu/fieldschool> Email: watrall@msu.edu

We are extremely happy to officially announce the Cultural Heritage Informatics Fieldschool (ANP491: Methods in Cultural Heritage Informatics). Taking place from May 31st to July 1st (2011) on the campus of Michigan State University, the Cultural Heritage Informatics Fieldschool will introduce students to the tools and techniques required to creatively apply information and computing technologies to cultural heritage materials and questions.

The Cultural Heritage Informatics Fieldschool is a unique experience that uses the model of an archaeological fieldschool (in which students come together for a period of 5 or 6 weeks to work on an archaeological site in order to learn how to do archaeology). Instead of working on an archaeological site, however, students in the Cultural Heritage Informatics Fieldschool will come together to collaboratively work on several cultural heritage informatics projects. In the process they will learn a great deal about what it takes to build applications and digital user experiences that serve the domain of cultural heritage – skills such as programming, user experience design, media design, project management, user centered design, digital storytelling, etc. …

The Cultural Heritage Informatics Fieldschool is open to both graduate students and undergraduates. There are no prerequisites (beyond an interest in the topic). Students from a wide variety of departments, programs, and disciplines are welcome. Students are required to enroll in both sections 301 (3 credits) and 631 (3 credits) of ANP 491 (Methods in Cultural Heritage Informatics).

Admission to the Cultural Heritage Informatics Fieldschool is by application only.

To apply, please fill out the Cultural Heritage Informatics Fieldschool Application Form <http://chi.matrix.msu.edu/fieldschool/chi-fieldschool-application>. Applications are due no later than 5pm on March 14th. Students will be notified as to whether they have been accepted by March 25th.

Clifford Lynch drew my attention to “an announcement from the UK Royal Society indicating that in celebration of Open Access week they were opening their entire journal archive for free access till the end of the society’s 350th anniversary year, 30 November 2010.” This is a great opportunity to get access to two issues of Philosophical Transactions of the Royal Society A from August and September 2010, which focus on e-Science and contain a number of outstanding papers. See <http://rsta.royalsocietypublishing.org/content/368/1925.toc> and <http://rsta.royalsocietypublishing.org/content/368/1926.toc>.

A few examples:

  • “Methodological commons: arts and humanities e-Science fundamentals” (abstract and pdf);
  • “Deploying general-purpose virtual research environments for humanities research” (abstract and pdf);
  • “Use of the Edinburgh geoparser for georeferencing digitized historical collections” (abstract and pdf);
  • “Adoption and use of Web 2.0 in scholarly communications” (abstract and pdf);
  • “Retaining volunteers in volunteer computing projects” (abstract and pdf).

[Figure from “Use of the Edinburgh geoparser for georeferencing digitized historical collections”]

I’ve never had the opportunity to visit the impressive ruins of Pompeii in Italy, the Roman city preserved by the eruption of Vesuvius. I know it from books, articles, and occasional glimpses on TV or in movies, but now there’s another way to get a sense of what it must have felt like to actually walk the streets of the ancient Roman city: Google Maps Street View. For instance, you can walk around a 3D version of the amphitheater or follow one of the streets. (With thanks to Jack M. Sasson’s Agade mailing list.)

My colleague Erik Wilde is organizing a workshop on Location and the Web. I’m helping to organize and have already hit some of the email lists with a call for papers. The types of questions explored by this workshop will be directly relevant to researchers interested in using GoogleEarth or Second Life for visualization and analysis (for instance). Here’s his call for papers:

the paper submission deadline for the First Workshop on Location and the Web (LocWeb 2008) is only 18 days away. we now have a pretty strong program committee, and i am looking forward to the submitted papers and of course the workshop itself.

so if you are interested in location information and the web, please consider submitting a paper. the workshop is held in beijing and co-located with WWW2008, the 2008 edition of the world’s premier conference in the area of web technologies.

my personal hope for the workshop is that we will be able to get strong submissions in the area of how to make location information available as part of the web, not so much over the web. there are countless examples of applications with location as part of their data model, which are accessible through some web interface, but there are far fewer examples of applications which try to turn the web into a location-aware information system. the latter would be the perfect candidate for the workshop.

Reading the recent posts by Fennelle Miller and Kevin Schwarz got me to look into spatial data issues a bit more closely. One theme that seems to crop up again and again is cost and complexity.

GIS data is still difficult to share dynamically over the Web, but things are changing. GoogleEarth, Google Maps, Open Layers, etc. provide great client-side tools for viewing and interacting with spatial data (not just points, but also vector lines and polygons). GoogleEarth and Google Maps are proprietary, but they are available as free downloads or free APIs. They also work with an XML format (KML) that is pretty simple, enjoys a wide user community, and works with non-Google tools.
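To give a sense of just how simple KML is, here’s a minimal sketch that builds a one-placemark KML file using only Python’s standard library. The site name, coordinates, and output filename are made up for illustration.

```python
# A minimal sketch: build a one-placemark KML file with Python's standard
# library. The site name, coordinates, and output filename are made up.
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"
ET.register_namespace("", KML_NS)

kml = ET.Element(f"{{{KML_NS}}}kml")
doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
pm = ET.SubElement(doc, f"{{{KML_NS}}}Placemark")
ET.SubElement(pm, f"{{{KML_NS}}}name").text = "Example site"
point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
# KML coordinate order is lon,lat[,alt], not lat,lon.
ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = "-122.27,37.87,0"

ET.ElementTree(kml).write("site.kml", encoding="UTF-8", xml_declaration=True)
```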

There are some tools for transforming the ubiquitous ESRI shapefiles into KML documents (see this blog post at PerryGeo, and also the post’s comments). Here’s a link to some “how to” discussions on using PHP to read MapInfo (.mif) files for use with Google Maps. Here’s a link to an open source PHP class that reads ESRI shapefiles, the first step in converting them on a server to KML or other formats. The point of all this is that, with some development work, we can transform (to some degree at least) typical GIS data into formats that work better on the Web.
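As a rough illustration of the kind of development work involved, here’s a hedged sketch (in Python rather than PHP) that converts point features from a shapefile into KML placemarks. It assumes the third-party pyshp library; the attribute field name “SITE_NAME” and both file paths are hypothetical.

```python
# A hedged sketch: convert point features from an ESRI shapefile into KML
# placemarks. Assumes the third-party pyshp library ("pip install pyshp");
# the attribute field "SITE_NAME" and both file paths are hypothetical.
import shapefile  # pyshp
from xml.sax.saxutils import escape

reader = shapefile.Reader("sites.shp")
# reader.fields[0] is the deletion flag, so skip it when locating the field.
field_names = [f[0] for f in reader.fields[1:]]
name_index = field_names.index("SITE_NAME")

placemarks = []
for rec in reader.shapeRecords():
    lon, lat = rec.shape.points[0]  # assumes point geometry
    name = escape(str(rec.record[name_index]))
    placemarks.append(
        f"<Placemark><name>{name}</name>"
        f"<Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>"
    )

kml = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
    + "".join(placemarks)
    + "</Document></kml>"
)
with open("sites.kml", "w", encoding="utf-8") as out:
    out.write(kml)
```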

Of course, GML (the community-developed open standard) is a better choice than KML for storing GIS data. KML is needed for Google’s great and easy-to-use visualization tools, but GML is a much more robust standard for GIS data. GML also has the advantage of being an open, non-proprietary XML format: you’re not locked into any one software vendor, and you gain important data-longevity advantages. It should be noted that Open Layers (the open source equivalent of Google Maps) supports GML.
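For comparison, here’s a hedged sketch of the same sort of point expressed as a GML fragment; the coordinates are made up. One gotcha worth knowing: with the URN form of EPSG:4326, GML’s axis order is latitude then longitude, the reverse of KML’s lon,lat order.

```python
# Hedged sketch: a point as a GML 3 fragment. With the URN form of
# EPSG:4326 the axis order is latitude, longitude (the reverse of KML's
# lon,lat order), a common source of bugs. Coordinates are made up.
lat, lon = 37.87, -122.27
gml_point = (
    '<gml:Point xmlns:gml="http://www.opengis.net/gml" '
    'srsName="urn:ogc:def:crs:EPSG::4326">'
    f"<gml:pos>{lat} {lon}</gml:pos>"
    "</gml:Point>"
)
print(gml_point)
```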

However, I’m not sure of the immediate need to go through all this effort. Sure, it’s nice to have GIS data easily viewable in a web browser or in a slick visualization tool like GoogleEarth. But the fundamentals of data access, longevity, and discovery need to be in place before we put lots of effort into online visualization.

Instead, we should look at some strategies to make our GIS data easier to find and maintain. And we need to approach the issue pragmatically, since overly complex or elaborate requirements will mean little community uptake. Perhaps we can look at ways of registering GIS datasets (ideally stored in GML) in online directories with some simple metadata (“information about information”). A dataset’s general location (say, lat/lon point coordinates), some information about authorship, keywords, and a stable link to download the full GIS dataset would be an excellent and simple start. Simple point data describing the general location of a project dataset would be enough to build an easy map interface for users to find information about locations.
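To make that concrete, here’s a sketch of what a minimal directory record might look like. Every field name, value, and URL below is purely illustrative, not a proposed standard.

```python
# Illustrative only: a minimal directory record for a GIS dataset.
# Field names, values, and the URL are made up, not a proposed standard.
import json

record = {
    "title": "Example Valley Survey",
    "creator": "Jane Doe, Example University",
    "keywords": ["survey", "settlement", "ceramics"],
    "location": {"lat": 37.87, "lon": -122.27},  # general project location
    "download": "http://example.org/data/valley-survey.gml",  # stable link
}

print(json.dumps(record, indent=2))
```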

Such directories can be maintained by multiple organizations, and they can share/syndicate their content with tools such as GeoRSS feeds (RSS with geographic point data). It’s easy to develop aggregation services from such feeds. You can also use something like Yahoo Pipes to process these feeds into KML for use in GoogleEarth! (We do that with Open Context, though it still needs some troubleshooting.)
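As a small sketch of the first step such an aggregation service would take, here’s some Python that pulls titles and point coordinates out of a GeoRSS “Simple” feed; the feed URL is hypothetical.

```python
# Hedged sketch: read a GeoRSS "Simple" feed and print each item's title
# and point coordinates, the first step of an aggregator. The feed URL
# is hypothetical.
import urllib.request
import xml.etree.ElementTree as ET

GEORSS = "{http://www.georss.org/georss}"

with urllib.request.urlopen("http://example.org/sites.rss") as response:
    tree = ET.parse(response)

for item in tree.iter("item"):
    title = item.findtext("title", default="(untitled)")
    point = item.findtext(GEORSS + "point")  # "lat lon", space separated
    if point:
        lat, lon = (float(v) for v in point.split())
        print(f"{title}: {lat}, {lon}")
```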

Also, Sean Gillies (with the Ancient World Mapping Center) is doing some fantastic work on “Mush,” his project for processing GeoRSS feeds. See this post and this post for details and examples. Thus, with simple tools like GeoRSS feeds, we can contribute toward a low-cost distributed system that makes archaeological datasets much easier to find and discover through map-based interfaces and some types of spatial querying (such as buffers). This may be a good way to address some of Fennelle Miller’s concerns about recovering and reusing all that hard-won geospatial data.

Of course, site security is an important issue, and finding ways of making our data as accessible as possible without endangering sites or sacred locations is important. I’m glad Kevin Schwarz raised the issue, and it’ll be very useful to learn more about how he and his colleagues are dealing with it.

I’m familiarizing myself with the new terrain of the UC Berkeley School of Information (iSchool), and I’ve had the pleasure of working closely with Erik Wilde, a member of the iSchool faculty with heavy XML research interests.

Anyway, Erik has a new iPhone, the little device that has sent Apple share prices way up. He showed me the iPhone and how it connects to the web, plus some exciting ideas for new services that can be piped into it. It feels like living in the future.

We also talked about what near-continuous mobile web connectivity can give you in terms of social networking and geo-referenced data. One thing we’ve mused about is the iPhone’s location awareness. It doesn’t have a GPS in it, but you can usually get some geo-location information through the IP address of the phone’s Internet connection and a website like this, which relates IP addresses to geographic locations. It might be fun to use the phones as a “friendar” (friend radar) to alert you when you’re near an acquaintance. Sounds fun, except Erik pointed out some obvious privacy issues. This type of thing would also be useful for tourists who visit places and want to augment their reality with web-based information about where they are. Geo-tagging web content should therefore be a real concern for archaeologists and museum people who want to interact with the public.
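Just to sketch the “friendar” idea: once you have two positions, however they were obtained (IP lookup or otherwise), the alert itself is a simple distance check. Everything below (the coordinates and the 10 km radius) is made up for illustration.

```python
# Hedged sketch of the "friendar" idea: alert when two positions, however
# obtained (IP lookup, WiFi, etc.), fall within some radius. The
# coordinates and the alert radius are made up.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ~6371 km

me = (37.8716, -122.2727)      # Berkeley campus (illustrative)
friend = (37.8044, -122.2712)  # Oakland (illustrative)

if haversine_km(*me, *friend) < 10:
    print("A friend is nearby!")
```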

Erik tried all this out with the iPhone, using both the local campus WiFi network and the AT&T cellular network together with an IP-address geo-lookup service on the web. The AT&T network resolved to London (AT&T knows where his phone is, but doesn’t make it public), while the UC Berkeley network correctly resolved to Berkeley. Some wireless networks will provide better geo-location than others, so interesting geo-location-enabled services would work better in some places than in others. Who knows, maybe enough networks are sufficiently “geo-localizable” to make building such services for iPhone-like devices worthwhile.