Thu 18 Aug 2011
(Cross posted on Heritage Bytes)
Sun 27 Mar 2011
As many of us know, the annual SAA conference is about to begin in Sacramento, California. Like all large conferences, scheduling represents a complex and difficult juggling act. So, it is not too much of a surprise when awkward schedule conflicts emerge. Unfortunately this year, two digitally themed sessions coincide in the schedule (see the Saturday schedule, 1-3ish PM slot).
The silver lining is that these two sessions are digitally themed and both make excellent use of the Web. That means you can connect with the ideas and people involved in these sessions asynchronously. Colleen Morgan organized a session on blogging in archaeology. As one would expect from the subject matter, a great deal of excellent and fascinating discussion can be found online, contributed by many thoughtful archaeological bloggers. Here’s a link to a post that kicked off the discussion. The other digitally themed session was organized by Josh Wells, convener of DDIG. This session, an electronic symposium, also has excellent Web content published on Visible Past. Visible Past is an electronic publication platform built on WordPress, a powerful blogging application. These papers (since they are more formal and less conversational, I’ll call them “papers”, not “posts”) can be found here: http://visiblepast.net/see/archives/939
Tue 1 Mar 2011
DDIG member Ethan Watrall (Asst. Professor of Anthropology @ MSU) sends us the following information about his upcoming Cultural Heritage Informatics (CHI) field school, which is part of the CHI Initiative at Michigan State University.
Excerpts quoted. For full details, please see this PDF LINK.
Site Link: <http://chi.matrix.msu.edu/fieldschool> Email: email@example.com
We are extremely happy to officially announce the Cultural Heritage Informatics Fieldschool (ANP491: Methods in Cultural Heritage Informatics). Taking place from May 31st to July 1st (2011) on the campus of Michigan State University, the Cultural Heritage Informatics Fieldschool will introduce students to the tools and techniques required to creatively apply information and computing technologies to cultural heritage materials and questions.
The Cultural Heritage Informatics Fieldschool is a unique experience that uses the model of an archaeological fieldschool (in which students come together for a period of 5 or 6 weeks to work on an archaeological site in order to learn how to do archaeology). Instead of working on an archaeological site, however, students in the Cultural Heritage Informatics Fieldschool will come together to collaboratively work on several cultural heritage informatics projects. In the process they will learn a great deal about what it takes to build applications and digital user experiences that serve the domain of cultural heritage – skills such as programming, user experience design, media design, project management, user centered design, digital storytelling, etc. …
The Cultural Heritage Informatics Fieldschool is open to both graduate students and undergraduates. There are no prerequisites (beyond an interest in the topic). Students from a wide variety of departments, programs, and disciplines are welcome. Students are required to enroll for both sections 301 (3 credits) and 631 (3 credits) of ANP 491 (Methods in Cultural Heritage Informatics).
Admission to the Cultural Heritage Informatics Fieldschool is by application only.
To apply, please fill out the Cultural Heritage Informatics Fieldschool Application Form <http://chi.matrix.msu.edu/fieldschool/chi-fieldschool-application>. Applications are due no later than 5pm on March 14th. Students will be notified as to whether they have been accepted by March 25th.
Mon 31 Jan 2011
Another country, same upheaval, same “opportunities” for looting of archaeological patrimony (sites, museums, storage facilities): after Iraq, now it’s Egypt’s turn. Hopefully, this will only be an unfortunate but short-lived episode. A specialized Facebook group, Restore + Save the Egyptian Museum!, has been started to attempt to gather news, an upgrade from the Iraqcrisis mailing list approach (in 2003, Facebook wasn’t yet the mass phenomenon it is now). Also, an ad hoc website, Egyptological Looting Database 2011, has been thrown up to try to keep track of what (and to what extent) we know about looting in different regions of the country. Compared with The Iraq War & Archaeology, this site endeavors to be a bit more systematic. I applaud all these initiatives. Again, my sincerest hope is that all this will prove to be “overkill”, but history has taught us to be prepared for the worst.
Tue 7 Dec 2010
This is a topical blog about archaeology and digital data, so this post may appear off topic at first, but trust me it is not.
The Republican Party (or GOP), in its quest to appear like the party of “fiscal responsibility” [sic], has launched a new crowd-sourcing site to go after “questionable” grants made by the National Science Foundation (NSF). NSF funds some archaeology, so this development is of interest to readers of Digging Digitally.
While one can take issue with the wisdom of cutting NSF’s budget versus other areas of the federal budget, what makes this development noteworthy is the explicit use of crowd-sourcing to politicize specific funding decisions. The GOP sponsored site asks users to:
In the “Search Award For” field, try some keywords, such as: success, culture, media, games, social norm, lawyers, museum, leisure, stimulus, etc. to bring up grants. If you find a grant that you believe is a waste of your taxdollars, be sure to record the award number.
OK. So does that mean “museums”, “social norms” and “culture” are all implicitly a waste of money? I guess “success” is a waste too. Naturally, you can’t cut any other area of government spending (like defense or entitlements) from the GOP site. It’s a nice way to make “crowd-sourcing” less than democratic, since essentially this website predetermines your choices in what you will cut. But I’m going off track…
More to the point, how should the average lay person understand an NSF award enough to evaluate it, especially when all that is available is a title and a short abstract? I’m not qualified to evaluate many grants in archaeology because different areas of specialization require so much background knowledge. I consider myself pretty scientifically literate and I can barely understand NSF award information in some areas of computer science, economics, climate research, etc.
Nevertheless, I trust that the NSF awards in these areas outside of my field are probably worthwhile. That’s because I generally trust the scientific community and scientific processes (grant reviews, peer-review). Science is not perfect, but it does tend to value skepticism, evidence, and intellectual freedom.
The GOP’s crowd-sourcing effort shows an implicit, but fundamental distrust of the scientific community. The GOP wants you to second-guess expert opinion, because scientific expertise is by its nature suspect in contemporary Republican Party ideology. No doubt this will further politicize climate science, evolutionary science, and many other areas archaeologists care about.
Lastly, the whole “fiscal responsibility” thing is pretty laughable. Via Twitter, Tom Scheinfeldt wrote:
Total NSF budget=$7 billion. Cost of yesterday’s tax cuts=$700 billion. Targeting NSF is just a smokescreen to keep budget hawks preoccupied
Good point! I politely sent a note about Tom’s point via the GOP site that maybe they could look for budget savings more fruitfully in entitlements or defense spending.
Wed 24 Feb 2010
A new report came out: The Future of the Internet IV, by J. Anderson and L. Rainie. It’s the 4th volume in this quasi-annual series (previous volumes also available online). This is an important study.
A survey of nearly 900 Internet stakeholders reveals fascinating new perspectives on the way the Internet is affecting human intelligence and the ways that information is being shared and rendered.
The web-based survey gathered opinions from prominent scientists, business leaders, consultants, writers and technology developers. It is the fourth in a series of Internet expert studies conducted by the Imagining the Internet Center at Elon University and the Pew Research Center’s Internet & American Life Project. In this report, we cover experts’ thoughts on the following issues:
“Three out of four experts said our use of the Internet enhances and augments human intelligence, and two-thirds said use of the Internet has improved reading, writing and rendering of knowledge,” said Janna Anderson, study co-author and director of the Imagining the Internet Center. “There are still many people, however, who are critics of the impact of Google, Wikipedia and other online tools.” Read more…
Mon 26 Oct 2009
A new publication from Microsoft Research is now available (open access) online: The Fourth Paradigm: Data-Intensive Scientific Discovery, Edited by Tony Hey, Stewart Tansley, and Kristin Tolle. Although as usual not specifically aimed at archaeology, there’s some interesting stuff. You can download it whole or by paper:
Part 1: Earth and Environment
Part 2: Health and Wellbeing
Part 3: Scientific Infrastructure
Part 4: Scholarly Communication
Thu 4 Dec 2008
After building out mostly idiosyncratic, departmental-level IT solutions for specific, outside-funded research projects, universities and other institutions of higher learning are now grappling with the expanding and changing demands put on them by their constituents: the academic research community.
The November-December issue, “Focusing on the Common Good for Higher Education,” of EDUCAUSE Review, a bimonthly magazine for the higher education IT community (freely accessible online), addresses these and other issues. It is a good read. Let me touch on some issues raised. Clifford Lynch (“The Institutional Challenges of Cyberinfrastructure and E-Research”) remarks how the advent of computer resources has fundamentally changed scholarly practice, from engineering to the humanities. The latter were the latecomers but often created the more ingenious and transformative applications. Beyond the hardware-oriented solutions, more and more effort has gone into “software-driven technologies such as high-performance data management, data analysis, mining and visualization, collaboration tools and environments, and large-scale simulation and modeling systems. Content, in the form of reusable and often very large datasets and databases—numeric, textual, visual—is an integral part of advanced information technology also.”
Development of the academic cyberinfrastructure
The cyberinfrastructure necessary for modern scientific research was at first built out by national institutions, e.g., the U.S. National Science Foundation. The prohibitive cost and scarcity of expertise made this approach the natural choice. In a second stage, individual research units within institutions of higher learning began to deploy specifically tailored IT solutions for projects usually funded to a large extent by national funding organizations. As the need for collaboration between different institutions has grown together with the pace of communications—as in the larger society, I’d say—the need for interoperability and some type of openness has risen. In some fields, a professional organization took it upon itself to establish repositories and the like to facilitate the exchange of ideas almost in real time, rather than via the old-style journals with their built-in time lag. In others, individual institutions stepped up to the plate. It is becoming more and more clear that campus-level infrastructures need to be built which can be used by all scholars, including those who aren’t able to obtain funding as easily and often don’t require specialized solutions anyway. A well-designed, easy-to-use institution-level cyberinfrastructure is becoming a must. Care needs to be taken, though, to ensure easy connection with other institutions’ IT infrastructure. This all needs to be thought through in consultation: different institutions, funding organizations, and countries of jurisdiction have different rules on how to deal with privacy issues regarding research data gathered from people, and so on. It will also fall mainly on the IT services of institutions of higher learning to be responsible for reliable, secure storage with redundancy, for the longer duration. How long should one hold on to the ever-growing mountain of research data?
The same data will also have to be online, so that scholars can access it even when away from their campus offices: so-called “cloud” computing. Virtual projects with collaborators spread out over many institutions need their data to reside in this “cloud.” Many challenges of implementation remain to be worked out.
In “Supporting the ‘Scholarship’ in E-Scholarship,” Christine L. Borgman advocates “e-scholarship,” i.e., “new forms of scholarship that are more information-intensive, data-intensive, distributed, collaborative, and multidisciplinary.” She states: “[a]lthough the data deluge presents the most immediate challenge for information technology strategy, academic planning, and research infrastructure, it is also the area of e-scholarship most subject to hype. Wired recently pronounced that science no longer needs theory, models, metadata, ontologies, or ‘the scientific method’: mining the data deluge replaces all of them.” I would call this the Google approach to research: if only one can find the perfect algorithm, all problems can be solved given enough data. This is of course naïve. Facts and observations do not exist in a vacuum; just as in particle physics, the act of observation changes what is observed. For example, when an archaeologist excavates, he/she destroys the context. The data remain but cannot be replicated later. This is why the reasoning behind research strategies, and the circumstances in which data are gathered, are so important. In the social sciences too, field or study data gathered from human subjects are unique and cannot be collected again exactly. “E-scholarship, as a form of scholarship enabled by cyberinfrastructure, should be viewed as evolution more than revolution. The pace of that evolution varies widely within and between disciplines, campuses, and countries. Distributed and multidisciplinary collaborations are both facilitated and complicated by cyberinfrastructure. Similarly, the changing forms of information and the spreading data deluge offer not only a wealth of new research opportunities but also a daunting array of new challenges.
Colleges and universities can minimize the challenges and maximize the opportunities by implementing campus cyberinfrastructure strategies that focus less on the technology per se and more on advances in scholarship and learning—that is, strategies supporting the ‘scholarship’ in e-scholarship.”
It goes without saying that the cyberinfrastructure challenges experienced by the academic world, and the applications and solutions it has found, are instructive for any organization that manages and disseminates knowledge. There are lessons to be learnt, mistakes to be avoided.
Note: Cross-posted at iCommons.org
Thu 13 Sep 2007
I’ve been poking around an interesting commercial initiative called “Freebase”, an open access / open licensed (using the Creative Commons attribution license) web-based data sharing system developed by Metaweb. Metaweb is a commercial enterprise, and according to their FAQ they plan on making money through some sort of fee structure on using their API (translation for archaeologists: an interface enabling machine-to-machine communication). Here’s a link to other blogger reactions, with lots of interesting discussion of Freebase.
I haven’t had any luck finding out how Freebase works, or what its underlying architecture is like. Given the shape of the Metaweb logo (triple lobes), I can only guess they have an RDF data-store (a big database of RDF triples). We’ll have an opportunity to learn more shortly, because Robert Cook of Metaweb has kindly agreed to speak about these efforts in our Information and Service Design Lecture series (at the UC Berkeley School of Information).
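If the guess above is right, the core idea is simple: every fact is a subject–predicate–object “triple,” and queries are just pattern matches against the store. Here is a minimal sketch in Python; to be clear, this is an illustration of the general RDF idea, not Freebase’s actual architecture, and all the entity and property names are made up:

```python
# A toy triple store: each fact is a (subject, predicate, object) tuple.
# Names like "freebase:Catalhoyuk" are invented for illustration only.
triples = [
    ("freebase:CatalhoyukFigurine", "rdf:type", "archaeology:Artifact"),
    ("freebase:CatalhoyukFigurine", "archaeology:material", "clay"),
    ("freebase:CatalhoyukFigurine", "archaeology:foundAt", "freebase:Catalhoyuk"),
    ("freebase:Catalhoyuk", "rdf:type", "archaeology:Site"),
]

def query(store, s=None, p=None, o=None):
    """Return every triple matching the pattern; None acts as a wildcard."""
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Everything the store knows about the figurine:
for triple in query(triples, s="freebase:CatalhoyukFigurine"):
    print(triple)
```

The appeal of this model for a service like Freebase is that new kinds of facts need no schema changes: contributors just add more triples with new predicates.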
(Editing note: Here is a much more complete description of Freebase’s conceptual organization.)
However, my first impressions from surfing through Freebase remind me of some of the data structures we’ve been using in Open Context, which is based on the OCHRE project’s ArchaeoML global schema (database structure). For example, Freebase seems to emphasize items of observation that have descriptive properties and contextual relationships with other items. Open Context works just like that, but, being designed for the field sciences and material collections, Open Context assumes observations have some spatial relationships with one another (especially spatial containment). The overall point is that these systems offer data contributors tremendous flexibility in how they organize and describe their observations, while still enabling interoperability and a common set of tools for exploring and using multiple datasets. It’s a way of sharing data without forcing people into inappropriate, rigid, or over-specified standards.
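To make that idea concrete, here is a hypothetical sketch (not Open Context’s actual code or the real ArchaeoML schema) of the general pattern: observed items carry free-form, contributor-defined descriptive properties, while spatial containment provides the shared structure:

```python
# Sketch of an ArchaeoML-style model: items with arbitrary descriptive
# properties, linked only by spatial containment. All labels and property
# names below are invented examples.
class Item:
    def __init__(self, label, **properties):
        self.label = label
        self.properties = properties   # contributor-defined, no fixed schema
        self.children = []             # items spatially contained in this one

    def contains(self, item):
        self.children.append(item)
        return item

    def walk(self):
        """Yield this item and everything it spatially contains, depth-first."""
        yield self
        for child in self.children:
            yield from child.walk()

site = Item("Site A")
trench = site.contains(Item("Trench 1", excavator="J. Doe"))
locus = trench.contains(Item("Locus 101", soil="ashy fill"))
locus.contains(Item("Bone fragment", taxon="Ovis/Capra", condition="burnt"))

print([item.label for item in site.walk()])
```

Each contributor chooses property names that fit their own recording system; because containment is the one shared assumption, tools can still browse and compare across very different datasets.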
Freebase looks more flexible in this regard (being designed for a wider set of applications). Freebase clearly has lots more professionalism in design and execution, and has an incredibly interesting API. It’s also great to see tools for data authors to share schemas (ways of organizing and describing datasets). All this shows you what great talent and venture capital funding delivers, and I’m duly impressed (and maybe a little jealous)!
We’re just now looking at RESTful web services for Open Context, and Freebase may offer an invaluable model or set of design parameters for opening up systems for machine-to-machine interactions. In fact, making Open Context “play well” with a powerful commercial service such as Freebase would offer great new opportunities for our user community (choices of interfaces and tools).
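For readers unfamiliar with the REST idea, the heart of it is that every record gets a stable, predictable URL that both humans and machines can dereference. The sketch below is purely hypothetical; the base URL, path pattern, and record fields are invented for illustration and are not Open Context’s or Freebase’s actual API:

```python
# Hypothetical RESTful URL scheme for item records (invented example).
from urllib.parse import urlparse

BASE = "http://opencontext.example.org"

def item_url(item_id):
    """Each item gets a stable, guessable URL -- the core REST idea."""
    return f"{BASE}/items/{item_id}.json"

def parse_item_url(url):
    """Recover the item identifier from its URL."""
    path = urlparse(url).path                      # e.g. /items/ABC123.json
    return path.rsplit("/", 1)[-1].removesuffix(".json")

record = {"id": "ABC123", "label": "Bone fragment"}
print(item_url(record["id"]))
print(parse_item_url(item_url(record["id"])))
```

With a scheme like this, another service can link to, fetch, or index any item without custom negotiation, which is exactly the kind of machine-to-machine “playing well” discussed above.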
Archaeology is a broad and diverse discipline, and making sure archaeologists can easily move data between different tools (blogs, online databases, and visualization environments like Google Earth) is an important need. We should take a serious look at systems like Freebase to make sure we’re best serving our community when we build such “cyberinfrastructure” systems.
BTW, anyone is welcome to work with us on an archaeological web-services project. Open Context, unlike Freebase (which is a service built on a commercial product), is open source, and you can get the source code here. It might be fun to come up with interesting ways to connect Freebase with Open Context.