geo-location


DDIG member Ethan Watrall (Asst. Professor of Anthropology @ MSU) sends us the following information about his upcoming Cultural Heritage Informatics (CHI) field school, which is part of the CHI Initiative at Michigan State University.

Excerpts quoted. For full details, please see this PDF LINK.

Site Link: <http://chi.matrix.msu.edu/fieldschool> Email: watrall@msu.edu

We are extremely happy to officially announce the Cultural Heritage Informatics Fieldschool (ANP491: Methods in Cultural Heritage Informatics). Taking place from May 31st to July 1st (2011) on the campus of Michigan State University, the Cultural Heritage Informatics Fieldschool will introduce students to the tools and techniques required to creatively apply information and computing technologies to cultural heritage materials and questions.

The Cultural Heritage Informatics Fieldschool is a unique experience that uses the model of an archaeological fieldschool (in which students come together for a period of 5 or 6 weeks to work on an archaeological site in order to learn how to do archaeology). Instead of working on an archaeological site, however, students in the Cultural Heritage Informatics Fieldschool will come together to collaboratively work on several cultural heritage informatics projects. In the process they will learn a great deal about what it takes to build applications and digital user experiences that serve the domain of cultural heritage – skills such as programming, user experience design, media design, project management, user centered design, digital storytelling, etc. …

The Cultural Heritage Informatics Fieldschool is open to both graduate students and undergraduates. There are no prerequisites (beyond an interest in the topic). Students from a wide variety of departments, programs, and disciplines are welcome. Students are required to enroll for both sections 301 (3 credits) and 631 (3 credits) of ANP 491 (Methods in Cultural Heritage Informatics).

Admission to the Cultural Heritage Informatics Fieldschool is by application only.

To apply, please fill out the Cultural Heritage Informatics Fieldschool Application Form <http://chi.matrix.msu.edu/fieldschool/chi-fieldschool-application>. Applications are due no later than 5pm on March 14th. Students will be notified as to whether they have been accepted by March 25th.

… that is, according to the [San Jose, CA] Mercury News:

But how did the hundreds of lesser-known Victorian writers regard the world around them? This question and many others in fields like literature, philosophy and history may finally find an answer in the vast database of more than 12 million digital books that Google has scanned and archived. Google, scholars say, could boost the new and emerging field of digital humanities, …

Google recently named a dozen winners of its first-ever “Digital Humanities Awards,” setting aside about $1 million over two years to help teams of English professors, historians, bibliographers and other humanities scholars harness the Mountain View search giant’s algorithms and its unique database of digital books. Among the winners was Dan Cohen, a professor of history and new media at George Mason University, who hopes to come up with a much broader insight into the Victorian mind, overcoming what he calls “this problem of anecdotal history.” “What’s incredible about the Google database is that they are really approaching a complete database of Victorian books,” Cohen said. “So we have the possibility, for the first time, of going to something that’s less anecdotal, less based on a chosen few authors; to saying, ‘Does that jibe with what the majority of authors were saying at that time?’”

Besides the Victorian study, the winning teams include a partnership between UC Riverside and Eastern Connecticut State University to improve the identification of books published before 1801 in Google’s digital archive, and a team including UC Berkeley and two British universities to develop a “Google Ancient Places” index. It would allow anyone to query Google Books to find titles related to a geographic location and time period, and then visualize the results on digital maps. “We have the ability to harness vast amounts of information collected from different places,” said Eric Kansa, a UC Berkeley researcher working on the ancient places project, “and put them together to get a whole new picture of ancient cultures.”

Maybe our own Eric Kansa can explain a bit more about the Google Ancient Places project? The announcement stated: “Elton Barker, The Open University, Eric C. Kansa, University of California-Berkeley, Leif Isaksen, University of Southampton, United Kingdom. Google Ancient Places (GAP): Discovering historic geographical entities in the Google Books corpus.” They further wrote:

Google’s Digital Humanities Research Awards will support 12 university research groups with unrestricted grants for one year, with the possibility of renewal for an additional year. The recipients will receive some access to Google tools, technologies and expertise. Over the next year, we’ll provide selected subsets of the Google Books corpus—scans, text and derived data such as word histograms—to both the researchers and the rest of the world as laws permit. (Our collection of ancient Greek and Latin books is a taste of corpora to come.)

And now for something a bit different: “… volunteers are gathering in cities around the world to help bolster relief groups and government first responders in a new way: by building free open-source technology tools that can help aid relief and recovery in Haiti. ‘We’ve figured out a way to bring the average citizen, literally around the world, to come and help in a crisis,’ says Noel Dickover, co-founder of Crisis Commons (crisiscommons.org), which is organizing the effort.” (source: NYT article)

Update 2-17-10: Wired magazine has set up its own Haiti webpage: Haiti Rewired.

My colleague Erik Wilde is organizing a workshop on Location and the Web. I’m helping to organize and have already hit some of the email lists with a call for papers. The types of questions explored by this workshop will be directly relevant to researchers interested in using GoogleEarth or Second Life for visualization and analysis (for instance). Here’s his call for papers:

the paper submission deadline for the First Workshop on Location and the Web (LocWeb 2008) is only 18 days away. we now have a pretty strong program committee, and i am looking forward to the submitted papers and of course the workshop itself.

so if you are interested in location information and the web, please consider submitting a paper. the workshop is held in beijing and co-located with WWW2008, the 2008 edition of the world’s premier conference in the area of web technologies.

my personal hope for the workshop is that we will be able to get strong submissions in the area of how to make location information available as part of the web, not so much over the web. there are countless examples of applications with location as part of their data model, which are accessible through some web interface, but there are far fewer examples of applications which try to turn the web into a location-aware information system. the latter would be the perfect candidate for the workshop.

Reading the recent posts by Fennelle Miller and Kevin Schwarz got me looking into spatial data a bit more closely. Two issues that seem to crop up again and again are cost and complexity.

GIS data is still difficult to share dynamically over the Web, but things are changing. GoogleEarth, Google Maps, Open Layers, etc. provide great tools on the client side for viewing and interacting with spatial data (not just points, but also vector lines and polygons). GoogleEarth and Google Maps are proprietary, but they are available as free downloads or free APIs. They also work with an XML format (KML) that is pretty simple, enjoys a wide user community, and can work with tools not developed by Google.
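To make “pretty simple” concrete, here’s a minimal sketch (Python, standard library only) that writes a one-placemark KML file; the placemark name and coordinates are invented for illustration:

```python
# A minimal KML document with a single Placemark, built with the Python
# standard library. Note that KML coordinates are lon,lat[,alt].
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"
ET.register_namespace("", KML_NS)  # write KML as the default namespace

kml = ET.Element("{%s}kml" % KML_NS)
doc = ET.SubElement(kml, "{%s}Document" % KML_NS)
pm = ET.SubElement(doc, "{%s}Placemark" % KML_NS)
ET.SubElement(pm, "{%s}name" % KML_NS).text = "Example site"  # hypothetical
ET.SubElement(pm, "{%s}description" % KML_NS).text = "General project location"
point = ET.SubElement(pm, "{%s}Point" % KML_NS)
ET.SubElement(point, "{%s}coordinates" % KML_NS).text = "-122.259,37.872,0"

ET.ElementTree(kml).write("site.kml", xml_declaration=True, encoding="UTF-8")
```

The resulting file opens directly in GoogleEarth and can be loaded by Google Maps.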

There are some tools for transforming the ubiquitous ESRI shapefiles into KML documents (see this blog post at PerryGeo, and also the post’s comments). Here’s a link to some “how to” discussions on using PHP to read MapInfo (.mif) files for use with Google Maps. Here’s a link to an open-source PHP class that reads ESRI shapefiles, the first step needed in converting them on a server to KML or other formats. The point of all this is that, with some development work, we can transform (to some degree at least) typical GIS data into formats that work better on the Web.
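As a rough Python analogue of those PHP approaches (a sketch only: it assumes the open-source pyshp library and a point shapefile, and the input path and the choice of the first attribute as the placemark name are hypothetical):

```python
# Sketch: server-side conversion of a point shapefile to KML,
# assuming the pyshp library ("pip install pyshp").
import shapefile  # pyshp
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"
ET.register_namespace("", KML_NS)

sf = shapefile.Reader("sites.shp")  # hypothetical point shapefile
kml = ET.Element("{%s}kml" % KML_NS)
doc = ET.SubElement(kml, "{%s}Document" % KML_NS)

for sr in sf.shapeRecords():
    lon, lat = sr.shape.points[0]  # a point shape has a single vertex
    pm = ET.SubElement(doc, "{%s}Placemark" % KML_NS)
    ET.SubElement(pm, "{%s}name" % KML_NS).text = str(sr.record[0])
    pt = ET.SubElement(pm, "{%s}Point" % KML_NS)
    ET.SubElement(pt, "{%s}coordinates" % KML_NS).text = "%f,%f" % (lon, lat)

ET.ElementTree(kml).write("sites.kml", xml_declaration=True, encoding="UTF-8")
```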

Of course, GML (the community-developed open standard) is a better choice for GIS data than KML. KML is needed for Google’s great and easy-to-use visualization tools, but GML is a much more robust standard for GIS data. GML also has the advantage of being an open, non-proprietary XML format: you’re not locked into any one software vendor, and you gain important data-longevity advantages. It should be noted that Open Layers (the open-source equivalent of Google Maps) supports GML.

However, I’m not sure of the immediate need to go through all this effort. Sure, it’s nice to have GIS data easily viewable in a web browser or a slick visualization tool like GoogleEarth. But the fundamentals of data access, longevity, and discovery need to be in place before we put lots of effort into online visualization.

Instead, we should look at some strategies to make our GIS data easier to find and maintain. And we need to approach the issue pragmatically, since overly complex or elaborate requirements will mean little community uptake. Perhaps we can look at ways of registering GIS datasets (ideally stored in GML) in online directories with some simple metadata (“information about information”). A dataset’s general location (say, lat/lon point coordinates), some information about authorship, keywords, etc., and a stable link to download the full GIS dataset would be an excellent and simple start. Simple point data describing the general location of a project dataset would be enough to build an easy map interface for users to find information about locations.
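To make this concrete, here’s a sketch of what such a registry entry might look like; every field name is illustrative, not an existing standard:

```python
# A hypothetical directory record for one GIS dataset: a general
# point location, basic authorship metadata, and a stable download link.
dataset_record = {
    "title": "Example Valley Survey GIS data",
    "author": "J. Doe, Example University",
    "keywords": ["survey", "settlement patterns", "Bronze Age"],
    "general_location": {"lat": 37.87, "lon": -122.26},  # deliberately coarse
    "download_url": "http://example.org/datasets/valley-survey.gml",
}
```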

Such directories can be maintained by multiple organizations, and they can share/syndicate their content with tools such as GeoRSS feeds (RSS with geographic point data). It’s easy to develop aggregation services from such feeds. You can also use something like Yahoo Pipes to process these feeds into KML for use in GoogleEarth! (We do that with Open Context, though it still needs some troubleshooting.)
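As an illustration of how easy such feeds are to work with, here’s a sketch (Python, standard library only) that pulls point locations out of a GeoRSS “Simple” feed, the first step for any aggregation service; the feed URL is hypothetical:

```python
# Sketch: read titles and georss:point locations from an RSS feed.
import urllib.request
import xml.etree.ElementTree as ET

GEORSS = "http://www.georss.org/georss"

with urllib.request.urlopen("http://example.org/datasets/feed.rss") as resp:
    tree = ET.parse(resp)

for item in tree.iter("item"):
    title = item.findtext("title")
    point = item.findtext("{%s}point" % GEORSS)  # "lat lon", space separated
    if point:
        lat, lon = (float(v) for v in point.split())
        print(title, lat, lon)
```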

Also, Sean Gillies (with the Ancient World Mapping Center) is doing some fantastic work on “Mush,” his project for processing GeoRSS feeds. See this post and this post for details and examples. Thus, with simple tools like GeoRSS feeds, we can contribute toward a low-cost, distributed system that makes archaeological datasets much easier to find and discover through map-based interfaces and some types of spatial querying (such as buffers, sketched below). This may be a good way to address some of Fennelle Miller’s concerns about recovering and reusing all that hard-won geospatial data.
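To give a flavor of that kind of spatial querying, here’s a sketch of a simple point-buffer filter over registry records shaped like the one sketched earlier, using the standard haversine great-circle distance:

```python
# Sketch: return records whose general location falls within a buffer.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def within_buffer(records, lat, lon, radius_km):
    """Filter dataset records (as in the earlier sketch) by distance."""
    return [r for r in records
            if haversine_km(lat, lon,
                            r["general_location"]["lat"],
                            r["general_location"]["lon"]) <= radius_km]
```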

Of course, site security is an important issue, and finding ways of making our data as accessible as possible without endangering sites or sacred locations is important. I’m glad Kevin Schwarz raised the issue, and it’ll be very useful to learn more about how he and his colleagues are dealing with it.

I’m familiarizing myself with the new terrain of the UC Berkeley School of Information (iSchool), and I’ve had the pleasure of working closely with Erik Wilde, a member of the iSchool faculty with heavy XML research interests.

Anyway, Erik has a new iPhone, the little device that has sent Apple share prices way up. He showed me the iPhone and how it connects to the web, plus some exciting ideas for new services that can be piped into it. It feels like living in the future.

We also talked about what near-continuous mobile web connectivity can give you in terms of social networking and geo-referenced data. One thing we’ve mused about is the iPhone’s location awareness. It doesn’t have a GPS in it, but you can usually get some geo-location information through the IP address of the phone’s Internet connection and a website like this, which relates IP addresses to geographic locations. It might be fun to use the phones as a “friendar” (friend radar) to alert you when you’re near an acquaintance. Sounds fun, except Erik pointed out some obvious privacy issues. This type of thing would obviously be useful for tourists who visit places and want to augment their reality with web-based information about where they are. Geo-tagging web content should be an obvious concern for archaeologists and museum people who want to interact with the public.
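As a sketch of the kind of lookup involved (the service URL and the response fields here are entirely hypothetical; real geo-IP services each define their own interfaces):

```python
# Sketch: ask a hypothetical geo-IP web service where an address is.
import json
import urllib.request

def geolocate_ip(ip):
    # Assumes the service returns JSON like {"city": ..., "lat": ..., "lon": ...}
    url = "http://geoip.example.org/json/%s" % ip
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))

print(geolocate_ip("128.32.0.1"))  # an address in UC Berkeley's range
```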

Erik tried all this out with the iPhone, using both the local campus WiFi network and the AT&T cellular network with an IP-address geo-lookup service on the web. The AT&T network resolved to London (AT&T knows where his phone is, but doesn’t make that public), while the UC Berkeley network correctly resolved to Berkeley. Some wireless networks will provide better geo-location than others, so interesting geo-location-enabled services will work better in some places than others. Who knows, maybe enough networks are sufficiently “geo-localizable” to make building services for iPhone-like devices worthwhile.