Reading the Wikipedia entry on Conway's Game of Life answers the question of why it was developed: "Conway was interested in a problem presented in the 1940s by renowned mathematician John von Neumann, who tried to find a hypothetical machine that could build copies of itself and succeeded when he found a mathematical model for such a machine with very complicated rules on a rectangular grid." It's interesting that von Neumann's idea of self-replication actually predates the discovery of the structure of DNA by a few years.
So I asked Google, and it turns out someone implemented a Turing machine inside Conway's Game of Life way back in 2000. A book called "Collision-Based Computing" and an applet called LogiCell (which uses Conway's Game of Life to do simple calculations) are available here.
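For contrast with the "very complicated rules" von Neumann needed, Conway's rules fit in a few lines. Here's a minimal Python sketch of one generation (the set-of-live-cells representation is just my choice for brevity, not how LogiCell does it):

    from collections import Counter

    def life_step(live):
        # live is a set of (x, y) coordinates of live cells.
        # Count live neighbours for every cell adjacent to a live cell.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1)
                         for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # Birth on exactly 3 neighbours; survival on 2 or 3.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    # A glider drifts diagonally forever, one cell every four steps:
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    print(life_step(life_step(life_step(life_step(glider)))))

Everything the Turing machine construction does is built out of repeated applications of that one rule.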
Thursday, November 27, 2008
Wednesday, November 12, 2008
Indexing for Efficient SPARQL
Another interesting way of indexing triples: A role-free approach to indexing large RDF data sets in secondary memory for efficient SPARQL evaluation "We propose a simple Three-way Triple Tree (TripleT) secondary-memory indexing technique to facilitate efficient SPARQL query evaluation on such data sets. The novelty of TripleT is that (1) the index is built over the atoms occurring in the data set, rather than at a coarser granularity, such as whole triples occurring in the data set; and (2) the atoms are indexed regardless of the roles (i.e., subjects, predicates, or objects) they play in the triples of the data set. We show through extensive empirical evaluation that TripleT exhibits multiple orders of magnitude improvement over the state of the art on RDF indexing, in terms of both storage and query processing costs."
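To make the role-free idea concrete, here's a toy Python sketch of an atom-level index: each atom gets one entry no matter which position it occupies in a triple (the paper uses B+-trees with sorted payloads; a plain dictionary stands in for those here):

    from collections import defaultdict

    def build_atom_index(triples):
        # Index every atom once, regardless of whether it appears
        # as subject (role 0), predicate (role 1), or object (role 2).
        index = defaultdict(list)
        for triple in triples:
            for role, atom in enumerate(triple):
                index[atom].append((role, triple))
        return index

    triples = [
        ("alice", "knows", "bob"),
        ("bob", "knows", "carol"),
        ("knows", "type", "Property"),  # "knows" also occurs as a subject
    ]
    index = build_atom_index(triples)
    # A single lookup finds "knows" in all its roles:
    for role, triple in index["knows"]:
        print(role, triple)

The point of indexing this way is that a query atom needs only one lookup, rather than separate probes of subject-, predicate-, and object-keyed indexes.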
While looking around on arXiv I did a quick search and found two more interesting papers that seem related to a previous discussion on how the Semantic Web needs its own programming language, or at least a way to process the web of data. Both are by Marko A. Rodriguez: "The RDF Virtual Machine" and "A Distributed Process Infrastructure for a Distributed Data Structure".
Tuesday, November 11, 2008
While you were away...
Now that I'm looking around for jobs, I came across a presentation on some of the work the easyDoc project did at Suncorp, "Technical Lessons Learned Turning the Agile Dials to Eleven". It covers automating getter/setter testing, Hibernate, and immutability. It's good to see the sophistication continued to increase after I left, reaching quite a high level (like automatic triangulation and molecule-level testing); a sketch of the getter/setter idea follows below.
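The presentation's context is Java, but as a rough Python illustration of what automating getter/setter testing means, one test can reflectively round-trip a value through every plain attribute instead of hand-writing a test per field (the Bean class here is a hypothetical stand-in, not from the presentation):

    def assert_roundtrips(obj):
        # For every plain instance attribute, write a unique sentinel
        # and check that reading it back returns the same object.
        for name in list(vars(obj)):
            sentinel = object()
            setattr(obj, name, sentinel)
            assert getattr(obj, name) is sentinel, f"{name} lost its value"

    class Bean:  # hypothetical stand-in for a generated DTO
        def __init__(self):
            self.name = ""
            self.count = 0

    assert_roundtrips(Bean())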