tag:blogger.com,1999:blog-33221412024-03-08T12:46:47.481+10:00More NewsTechnical stuff sometimes about programming.Unknownnoreply@blogger.comBlogger2121125tag:blogger.com,1999:blog-3322141.post-8588481033023335272016-04-17T12:38:00.001+10:002016-04-17T19:49:46.547+10:00Slavery is the New Bacon<blockquote>
“We can barely decide whether or not bacon will cause health problems year over year, let alone the more complicated issues like politics and race.”</blockquote>
<a href="https://twitter.com/mtdegoes/status/713154088108576768">Matthew T De Goes</a> (<a href="https://pbs.twimg.com/media/CeYzriBVIAAWQ5H.jpg">screen capture</a>).<br />
<blockquote>
“Some people just don’t want a bad person invited to a tech conference, even if their talk was picked by a blind committee, they are peaceful, they reject any type of violence, and they don’t pose a safety threat.”
</blockquote>
<a href="http://degoes.net/articles/lambdaconf-controversy">Personal Thoughts on the LambdaConf Controversy</a>.<br />
<br />
The committee was blind to any favour or discrimination. How can anyone object to our objectivity?
The blindfold came off for a bit though and the LambdaConf committee found out some stuff:
<br />
<blockquote>
“Are these views racist and sexist? Absolutely, since they don’t admit the possibility that, for example, an asian female with no background in computer science might do a better job at “governance” than any white male software engineer.
Are these views endorsed by LambdaConf or held by any staff members? Hell, no!”
</blockquote>
<a href="http://amar47shah.github.io/pages/lambdaconf-yarvin-call-for-feedback.html">LambdaConf-Yarvin Controversy: Call for Feedback</a>.<br />
<br />
Having a blind submission process, getting people to sign up to a Code of Conduct and running a conference purely on those ideals is a good thing to aim at. It’s possible that this could’ve worked for LambdaConf.<br />
<br />
In contrast though, the LambdaConf organisers <a href="http://amar47shah.github.io/posts/2016-03-28-lambdaconf-yarvin.html">went looking into the background of the speaker, emailed other speakers, held a vote and wrote a few blog posts</a>. It shows a lack of confidence in that process while undermining it at the same time. Maybe a more fully featured <a href="http://www.nature.com/nature/peerreview/debate/nature04988.html">open review</a> process would’ve been better.<br />
<br />
Blind reviews do nothing for inclusion or diversity and reinforce existing discrimination: “<a href="http://in.bgu.ac.il/women-forum/DocLib/articles/Buddenreplyonwhittakerfemaleauthorship3.pdf">Does double-blind review benefit female authors?</a>” and “<a href="http://www.pnas.org/content/108/8/3157.full">Understanding current causes of women’s underrepresentation in science</a>”. It's like waving the checkered flag at the end of a Formula 1 race and wondering why only rich people are finishing.<br />
<br />
The contradiction of LambdaConf is having a conference that touts its diversity and at the same time inviting someone who is against including certain groups of people. <a href="https://s3.amazonaws.com/sl-notes/yarvin.txt">Is Yarvin really the best guy for the job</a> — is he even trying? <a href="https://medium.com/@curtis.yarvin/why-you-should-come-to-lambdaconf-anyway-35ff8cd4fb9d#.2eo3vxqiz">No</a>. He just doubled down and justified his views.<br />
<br />
<a href="https://medium.com/@curtis.yarvin/why-you-should-come-to-lambdaconf-anyway-35ff8cd4fb9d#.2eo3vxqiz">In that post</a>, he makes it clear that Yarvin and Moldbug are the same person while saying the exact opposite. He’s saying, if you can’t tell the difference between the two, especially after thousands of words, it is you that has the problem not him. Don’t be confused, he’s blaming you — he’s not coming peacefully.<br />
<br />
He says he’s not racist but <a href="http://unqualified-reservations.blogspot.com.au/2009/07/why-carlyle-matters.html">Moldbug might be</a> (<a href="http://unqualified-reservations.blogspot.com.au/2009/03/gentle-introduction-to-unqualified.html">and another</a>). He talks about Carlyle, fascism (“no such thing as too much truth, too much justice, or too much order”), people as property (“we agree that he can sell himself into slavery”), and race is intelligence (“current results in human biodiversity”). <a href="http://www.econlib.org/library/Columns/LevyPeartdismal.html">It’s a regressive set of ideas — even in its own time</a>:<br />
<blockquote class="tr_bq">
“The alternative to markets was not socialism. There were socialist experiments, but there were no socialist economies. The alternative to market organization was slavery.”</blockquote>
<a href="https://fee.org/articles/150-years-and-still-dismal/">150 Years and Still Dismal!</a><br />
<br />
The purpose of a conference is networking and learning. <a href="http://argumatronic.com/posts/2016-03-29-LambdaConf-sponsorship.html">It’s a place where people are going to teach children, single mothers, parents, and anyone else who comes along</a>. This will make a difference.<br />
<br />
The situation is that attendees will be able to see right through his poor disguise. It makes him a terrible teacher and the conference a poorer place at which to learn. The presence of a speaker publicised in such a way reduces attendees’ <a href="https://gradstudies.ucdavis.edu/sites/default/files/upload/files/facstaff/stereotype_threat_and_inflexible_perseverance_in_problem_solving.pdf">ability to perform</a> — hurting the people you’re trying to help.<br />
<br />
<a href="http://www.who.int/features/qa/cancer-red-meat/en/">Bacon is not good for you</a> and there is no slippery slope. <a href="https://alissapajer.github.io/posts/2016-03-26-lambdaconf.html">You pick who comes to your conference depending on the size of the out-group you want to create</a>. <a href="http://braythwayt.com/2016/03/30/racism-is-injustice.html">Racism and slavery are socially engineered injustice — you’re denying people’s humanity</a> and in that way it reduces us all.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-3322141.post-39766647842749323312015-08-05T11:38:00.002+10:002015-08-25T15:50:23.307+10:00Using Ruby to Inject you a MonoidA monoid has an append operation (like plus) and an identity (like 0), and you get a concat operation for free.<br />
<br />
In Ruby it's something like:<br />
<br />
<div class="p1">
<span class="s1">[1,2,3].inject(0) {|a, x| a + x }</span></div>
<div class="p1">
=> 6<br />
<br />
Or just, [1,2,3].inject(:+)<br />
</div>
<div class="p1">
<br /></div>
<div class="p1">
In Haskell, you can even see it in the type signature for Monoid:</div>
<div class="p1">
mconcat :: Monoid a => [a] -> a</div>
<div class="p1">
<br /></div>
<div class="p1">
You can see the list on the left ([a], like [1,2,3]) and the single unwrapped result on the right (just 6).</div>
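The same inject pattern works for other monoids too. A couple of extra examples (mine, not from the original post) with a different identity and append each time:

```ruby
# The same fold, with a different monoid each time.
# (These examples are my own additions for illustration.)
[1, 2, 3].inject(0) { |a, x| a + x }           # Sum monoid    => 6
["a", "b", "c"].inject("") { |a, x| a + x }    # String concat => "abc"
[[1], [2, 3]].inject([]) { |a, x| a + x }      # Array concat  => [1, 2, 3]
```

In each case the identity is the "empty" value for the type and append is just +.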
<div class="p1">
<br /></div>
<div class="p1">
What if you want to go up one level of abstraction and fold a list of operations instead of a list of numbers? You just use a different monoid, called Endo.</div>
<div class="p1">
<br /></div>
<div class="p1">
To take it to this next level you need a more abstract append and identity. </div>
<div class="p1">
<br /></div>
<div class="p1">
Append needs to combine two operations:</div>
<div class="p1">
</div>
<div class="p1">
compose = -> (f, g) { -> (*args) { f.call(g.call(*args)) } }</div>
<div class="p1">
<br /></div>
<div class="p1">
And identity just returns what you give it:</div>
<div class="p1">
</div>
<div class="p1">
<span class="s1">id = -> (x) { x }</span></div>
<div class="p1">
<span class="s1"><br /></span></div>
<div class="p1">
<span class="s1">Which lets you then write:</span></div>
<div class="p1">
<span class="s1">
</span></div>
<div class="p1">
[->(x){x + 2}, ->(x){x * 7}].inject(id) {|f, g| compose.call(f, g) }.call(8)</div>
<div class="p1">
<span class="s1">=> 58</span></div>
<div class="p1">
<span class="s1"><br /></span></div>
<div class="p1">
<span class="s1">Or in Haskell:</span></div>
<div class="p1">
<span class="s1">
</span></div>
<div class="p1">
<span class="s1">Prelude> let a = foldr (.) id [(+2), (*7)]</span></div>
<div class="p1">
<span class="s1">Prelude> a 8</span></div>
<div class="p1">
<span class="s1">58</span></div>
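Tying the two together, here's a sketch of a generic mconcat in Ruby. The names mconcat, id and compose are my own illustrative choices, not from any library; the point is that the Sum and Endo monoids both fit the same fold:

```ruby
# A generic mconcat: fold any list given its monoid's identity and append.
# (A sketch of my own; `mconcat`, `id` and `compose` are illustrative names.)
def mconcat(identity, elements, &append)
  elements.inject(identity, &append)
end

# The Sum monoid: identity 0, append +.
mconcat(0, [1, 2, 3]) { |a, b| a + b }              # => 6

# The Endo monoid: identity is the id function, append is composition.
id      = ->(x) { x }
compose = ->(f, g) { ->(x) { f.call(g.call(x)) } }
pipeline = mconcat(id, [->(x) { x + 2 }, ->(x) { x * 7 }]) { |f, g| compose.call(f, g) }
pipeline.call(8)                                     # => 58
```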
<div class="p1">
<br /></div>
<div class="p1">
See:</div>
<div class="p1">
</div>
<ul>
<li><a href="http://www.davesquared.net/2012/07/composition-via-scary-sounding-maths-terms.html">Composition via scary-sounding maths terms</a></li>
<li><a href="http://stackoverflow.com/questions/3136338/uses-for-haskell-id-function">Uses for Haskell id function</a></li>
</ul>
<br />
<div>
<br /></div>
<div class="p1">
<br /></div>
<div class="p1">
<br /></div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-3322141.post-4325122444603069932015-02-17T07:33:00.000+10:002015-02-18T04:25:48.284+10:00jQuery still not a MonadI read <a href="https://importantshock.wordpress.com/2009/01/18/jquery-is-a-monad/">jQuery is a Monad</a> and thought, yeah, this is pretty cool, I finally understand Monads.<br />
<br />
jQuery is not a Monad, <a href="http://www.reddit.com/r/haskell/comments/ive1n/you_got_your_type_class_in_my_jquery_applicative/c26y5oj">a Monad can take any type</a> and <a href="http://www.quora.com/Is-jQuery-a-monad">it has a join operator that takes a doubly wrapped value and turns it into a singly wrapped one</a>.
This means that for it to be a Monad, jQuery would have to work on any type; you have to be able to give it a String, an Int or a DOM node, and have it operate on them consistently. <a href="http://api.jquery.com/map/">jQuery's .map</a> can only deal with one type. It does have <a href="http://api.jquery.com/jquery.map/">jquery.map</a> but that would make the Array the Monad (or actually just <a href="http://en.wikibooks.org/wiki/Haskell/The_Functor_class">a Functor</a>), not jQuery.<br />
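To make join concrete, here's a sketch of my own (in Ruby rather than JavaScript, with unit, join and bind as illustrative names) using Array, which genuinely does have the Monad shape:

```ruby
# Array as a monad: unit wraps, join collapses one level of nesting,
# and bind is map followed by join (i.e. Ruby's flat_map).
# (My own illustrative example, not part of the jQuery discussion.)
unit = ->(x)      { [x] }
join = ->(nested) { nested.flatten(1) }        # only one level, not deep flatten
bind = ->(xs, f)  { join.call(xs.map(&f)) }    # same as xs.flat_map(&f)

join.call([[1, 2], [3]])                       # => [1, 2, 3]
bind.call([1, 2, 3], ->(x) { [x, x * 10] })    # => [1, 10, 2, 20, 3, 30]

# Left identity law: bind(unit(a), f) == f(a)
f = ->(x) { [x + 1] }
bind.call(unit.call(3), f) == f.call(3)        # => true
```

jQuery has nothing corresponding to join: a jQuery object of jQuery objects isn't a thing its API produces or consumes.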
<br />
Many of jQuery's methods are specific to DOM manipulation, parsing and the like, and are not related to Monads in any way - it's more a combinator library, like <a href="https://wiki.haskell.org/HXT">HXT</a>.<br />
<br />
The idea that it is a Monad still continues with, <a href="http://thewebivore.com/jquery-can-teach-monads/">What jQuery can teach you about monads</a> and <a href="http://codeartisan.blogspot.com.au/2015/02/does-jquery-expose-monadic-interface.html">Does jQuery expose a monadic interface?</a>.
One of the points that I think people ignore is that JavaScript has an implicit <tt>this</tt> and it affects how you apply functions:
<br />
<blockquote>
As is common with object-oriented language implementations, the <tt>this</tt> variable can be thought of as an implicitly-passed parameter, so we can then look through the API for a jQuery container looking for a method that takes one of these transformation callbacks and returns a new jQuery container.</blockquote>
This actually prevents you from easily (and certainly not clearly) writing Monads in JavaScript, <a href="http://blog.jorgenschaefer.de/2013/01/monads-for-normal-programmers.html">Monads in the generic fashion that is required</a>:
<br />
<blockquote>
So, is jQuery or the Django ORM a monad? Strictly speaking, no. While the monad laws actually hold, they do so only in theory, but you can not actually use them in those languages as readily as you can in, say, Haskell. Methods get the object as the first (implicit, in JavaScript) argument, not the value(s) stored in the object. Methods are not first class objects independent from their classes. You can circumvent those restrictions by implementing some boiler code or, in Python, metaclasses that do some magic. What you get for doing that is a much easier time writing functions that work on all monadic classes, at the expense of making the whole concept more difficult to understand.</blockquote>
As Ed said: "jQuery is an amazing combinator library, but it isn't a functor, it isn't applicative, and it isn't a monad."Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-3322141.post-43133594161424134312014-05-05T10:04:00.002+10:002014-05-06T09:11:10.401+10:00Recovering from ElasticSearch RecoveriesWe recently had a problem with <a href="http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-snapshots.html">ElasticSearch's snapshots</a> where a shard (a directory) was failing because it was missing the metadata file and data files.<br />
<div>
<br /></div>
<div>
This leads to a couple of criticisms of the snapshot directory format. Primarily, it takes files with reasonable extensions, generally Lucene files, and creates files like "__1" and then records a mapping from "__1" to "_def.fdt". For example:<br />
<div>
<br /></div>
<div class="code">
{<br />
"name" : "es-trk_allindices_2014-01-01_0000est",<br />
"index-version" : 78683,<br />
"files" : [ {<br />
"name" : "__0",<br />
"physical_name" : "_abc_0.pay",<br />
"length" : 2012,<br />
"checksum" : "13m617n",<br />
"part_size" : 104857600<br />
}, {<br />
"name" : "__1",<br />
"physical_name" : "_def.fdt",<br />
"length" : 97744833,<br />
"checksum" : "239wze",<br />
"part_size" : 104857600<br />
}<br />
...</div>
<div>
<br />
The files aren't even located together in the metadata file. In Lucene, you have a group of files prefixed with say "_def" like <a href="http://lucene.apache.org/core/2_9_4/fileformats.html">fdt, fdx, tip, tim, del, nvm, and nvd</a> in a single directory. Losing the metadata file means not only losing the helpful filenames but also the <a href="http://hackerlabs.org/blog/2011/10/01/hacking-lucene-the-index-format/">groupings used by Lucene</a>.</div>
</div>
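As an illustration of that mapping (a sketch of mine using a cut-down copy of the metadata shown above, not an ElasticSearch API), recovering the logical-to-physical filenames is just a walk over the files array:

```ruby
require 'json'

# Recover the "__N" => original Lucene filename mapping from snapshot
# metadata. (The JSON here is a cut-down, illustrative copy.)
metadata_json = <<~EOS
  { "files": [
      { "name": "__0", "physical_name": "_abc_0.pay" },
      { "name": "__1", "physical_name": "_def.fdt" } ] }
EOS

mapping = JSON.parse(metadata_json)['files']
              .map { |f| [f['name'], f['physical_name']] }
              .to_h
mapping['__1']   # => "_def.fdt"
```

Which is exactly the information you lose when the metadata file itself goes missing.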
<div>
<br /></div>
<div>
Luckily, <a href="http://blog.jpountz.net/post/33247161884/efficient-compressed-stored-fields-with-lucene">ElasticSearch uses FDT files</a> which have just enough information - the unique index identifier and then the payload - to turn them into a CSV or other file to be able to reimport the data into ElasticSearch. If you have the same problem you will have to <a href="http://elasticsearchserverbook.com/reroute-api-explained/">force shard allocation</a> or create an empty shard in a new cluster, delete the failed shard and copy the empty shard over the failed one.</div>
<div>
<br /></div>
<div>
The utility, <a href="https://github.com/OtherLevels/es_fdr">es_fdr</a>, reads FDT files and outputs them one field per line; it's available on the <a href="https://github.com/OtherLevels">OtherLevels GitHub page</a>. I've also updated a <a href="https://issues.apache.org/jira/browse/LUCENE-4706">related Lucene ticket</a>.</div>
<div>
<br /></div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-3322141.post-41968633767653126842013-11-17T19:31:00.002+10:002013-11-17T19:48:33.172+10:00Make "Enter" in Twitter Typeahead Select the First ItemThis is just a quick post, which may not be applicable for long, but it fixed the problem I had: selecting the first item on Enter even when it hadn't been highlighted with the mouse or cursor keys.<br />
<br />
<div class="code">$('input.typeahead').keypress(function (e) {<br />
if (e.which == 13) {<br />
var selectedValue = $('input.typeahead').data().ttView.dropdownView.getFirstSuggestion().datum.id;<br />
$("#input_id").val(selectedValue);<br />
$('form').submit();<br />
return true;<br />
}<br />
});<br />
<br/></div>
I appended the same information to this <a href="https://github.com/twitter/typeahead.js/issues/332">GitHub issue</a>.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-3322141.post-71686577379195383032013-08-23T12:10:00.005+10:002013-08-23T12:10:53.888+10:00Grace Hopper on Programmers and ProgrammingI've started to read "<a href="http://www.amazon.com/Show-Stopper-Breakneck-Generation-Microsoft/dp/product-description/0029356717">Show Stopper!</a>" and it has an excellent part in the first chapter about Grace Hopper, who created the first compiler and, with it, essentially the job that modern programmers perform:
<br />
<blockquote class="tr_bq">
"Hopper was convinced that overcoming the difficulties posed by proliferating computer languages would rank among the greatest technical challenges of the future. "To me programming is more than an important practical art," she said in a lecture in 1961. "It is also a gigantic undertaking in the foundations of knowledge." Ironically, she fretted that the greatest barrier to progress might come from programmers themselves. Like converts to a new religion, they often displayed a destructive closed-mindedness bordering on zealotry. "Programmers are a very curious group," she observed. </blockquote>
<blockquote class="tr_bq">
They arose very quickly, became a profession very rapidly, and were all too soon infected with a certain amount of resistance to change. The very programmers whom I have heard almost castigate a customer because he would not change his system of doing business are the same people who at times walk into my office and say, "But we have always done it this way." It is for this reason that I now have a counterclockwise clock hanging in my office."</blockquote>
I would love to know what the name of the lecture was and if there were any transcripts or copies of it around.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-3322141.post-45390876556715077832013-07-15T16:09:00.002+10:002013-11-17T19:46:57.980+10:00Copying between two uploaders in CarrierWaveTo copy between two cloud providers using CarrierWave and Fog is a bit tricky. Copying from one provider to a temporary file and then storing it in the other seems to work but the problem is that the file name is not preserved. If you wrap the temporary file in a SanitizedFile then Carrierwave will update the content without changing the name of the file.<br />
<br />
The following code preserves the file name between providers (where "obj" is the model, "source" is one uploader and "destination" is the other):<br />
<br />
<div class="code">
require 'open-uri' # Kernel#open needs open-uri to read from a URL<br />
<br />
def copy_between_clouds(obj, src, dest)<br />
tmp = File.new("/tmp/tmp", "wb")<br />
begin<br />
filename = src.file.url<br />
File.open(tmp, "wb") do |file|<br />
file << open(filename).read<br />
end<br />
t = File.new(tmp)<br />
sf = CarrierWave::SanitizedFile.new(t)<br />
dest.file.store(sf)<br />
ensure<br />
File.delete(tmp)<br />
end<br />
end
<br />
</div>
To use it:<br />
<div class="code">
copy_between_clouds(o, o.old_jpg, o.new_jpg)</div>
You might need to change "src.file.url" to "src.file.public_url" for some cloud providers.
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-3322141.post-4984529516801195812013-04-26T10:43:00.001+10:002013-07-15T20:01:46.557+10:00Elliott, Dina and SteveI was reading "<a href="http://www.cs.utexas.edu/users/hunt/research/hash-cons/hash-cons-papers/michie-memo-nature-1968.pdf">'Memo' Functions and Machine Learning</a>" again. It's an interesting article, appearing in Nature, before an article about mammalian reproduction, and uses balancing a pole on a trolley as an example of artificial intelligence. <br>
<br>
In the paper, the trolley is controlled in real-time by two computers: a PDP-7 and an <a href="http://www.memtsi.dsi.uminho.pt/ocr/ncr_4100_software.pdf">Elliott 4100</a>. I hadn't heard of the 4100 before but the Elliott and others come from the <a href="http://www.ourcomputerheritage.org/photos.htm">start of the British computing industry</a> - including others you may never have heard of like LEO and English Electric Computers. You can read more about them in "<a href="http://www.cs.man.ac.uk/CCS/res/res41.htm#d">Early Computer Production at Elliotts</a>" and "<a href="http://www.amazon.com/Moving-Targets-Elliott-Automation-Computer-Computing/dp/1848829329">Moving Targets - Elliott Automation and the Dawn of the Computer Age in Britain 1947-1976</a>" (<a href="http://www.cs.man.ac.uk/CCS/res/res55.htm#c">review of the book</a>).<br>
<br>
One of the <a href="http://www.ourcomputerheritage.org/E2%20405%20Norwich.htm">pictures in the Elliott computer archives has the caption</a>, "Switching on the Elliott 405 at Norwich City Council in 1957. The woman to the right is Dina Vaughan (later Dina St Johnston), who did the initial programming for the Norwich system." In 1959, she was the first person to start a UK software house. This being a company that only wrote software - not software that came bundled with hardware. The first, <a href="http://en.wikipedia.org/wiki/Software_industry">according to Wikipedia</a>, was Computer Usage Company in 1955.<br>
<br>
The best resource on her I could find was in "The Computer Journal" called "<a href="http://comjnl.oxfordjournals.org/content/52/3/378.full">An Appreciation of Dina St Johnston (1930–2007)</a>". It describes how she was writing software in the mid-50s making her a contemporary of people like Michie, Turing, von Neumann and Godel. It describes what programming was like then:<br>
<blockquote class="tr_bq">
"She wrote with a Parker 51 fountain pen with permanent black ink and if there ever was a mistake it had to be corrected with a razor blade. Whereas the rest of us tested programs to find the faults, she tested them to demonstrate that they worked."</blockquote>
One of the first commercial jobs for the company was a control system for the <a href="http://en.wikipedia.org/wiki/Calder_Hall_nuclear_power_station#Calder_Hall_nuclear_power_station">first industrial nuclear power plant</a>. Her company, Vaughan Programming Services, was visited on the <a href="http://www.electronicsweekly.com/blogs/david-manners-semiconductor-blog/2007/11/10th-anniversary-of-uk-softwar.html">10th anniversary of the British software industry</a> by "Electronic Weekly":<br>
<blockquote class="tr_bq">
"The staff in a company run by a woman might be expected to contain a high proportion of women, and this expectation is fulfilled", runs the EW report, "but, unexpectedly, a low proportion of the professionals employed have degrees, and there is no great emphasis on strong mathematical background in the mix of skills used."</blockquote>
The industry norms don't seem to have changed very much. More details can be found on Google Books by searching, "<a href="https://www.google.com/search?q=dina+vaughan&btnG=Search+Books&tbm=bks&tbo=1">Dina Vaughan</a>" (or St Johnston).<br>
<br>
In "<a href="http://www.amazon.com/Recoding-Gender-Changing-Participation-Computing/dp/0262018063">Recoding Gender</a>", Dina St Johnston is mentioned along with another female programming pioneer, <a href="http://en.wikipedia.org/wiki/Steve_Shirley">Dame Stephanie Shirley</a>. A refugee of World War II, she entered the software industry as a "late pioneer". She became interested in programming and got into the computer room by sweeping up chads, "I could not believe that I could be payed so much for something I enjoyed so much...early software was fascinating...it was so engrossing." In 1962 she started "Freelance Programmers", the second software company founded by a woman in the UK. Her view of the computing industry seems to be one that offered a way to address social and economic problems, "a crusade", a flexible workplace with policies designed to support women with dependents. Originally designed to help women with children to continue to work, its charter gradually became more broad to include supporting women's careers, then for women with any dependent and in 1975 was expanded, by law, to include men. The final mission became "people with dependents who couldn't work in the conventional environment". She says in her biography, the company had always employed some men and at the time of the passing of the equal opportunities law three of the 300 programmers and a third of the 40 systems analysts were male.<br>
<br>
A Guardian article, written in 1964, quoted in "<a href="http://books.google.com.au/books?id=OZ89AAAAIAAJ&lpg=PA66&dq=%22Much%20of%20the%20work%20is%20tedious%2C%20requiring%20great%20attention%20to%20detail%22&pg=PA66#v=onepage&q=%22Much%20of%20the%20work%20is%20tedious,%20requiring%20great%20attention%20to%20detail%22">Dinosaur and Co</a>", about Shirley and the early IT industry:<br>
<blockquote class="tr_bq">
"The main qualification is personality...Much of the work is tedious, requiring great attention to detail, and this is where women usually score...Mrs Steve Shirley...has found in computer programming an outlet for her artistic talents in the working out of logical patterns.</blockquote>
<blockquote class="tr_bq">
Now retired with a young baby, she has found that computer programming, since it needs only a desk, a head and paper and pencil, is a job that can be done from home between feeding the baby and washing the nappies. She is hoping to interest other retired programmers in joining her work on a freelance basis."</blockquote>
The difficulties in starting a software company in the 1950s and 1960s seem immense. There was the idea that you couldn't sell software, that it didn't have any value as a product or a service by itself, as customers expected it to be free with the hardware. Then there is the inequality and sexism. She called herself "Steve" as no one responded to her business letters when she used "Stephanie". Banks also required written permission from a man so that a woman could open a bank account. Furthermore, almost all companies and the public service required or expected women to leave their job when they married or had their first child, so you "retired with a young baby". One of the few ways women could continue to work was to start their own company.<br>
<br>
She mentions her title was for "services to the industry" and as any good programmer does, she defines Dames: "...recursively by saying, a Knight is a male Dame". She recently released a biography called "<a href="http://www.amazon.co.uk/Let-Go-Entrepreneur-Turned-Philanthropist/dp/1782342826">Let IT Go</a>" which includes many personal struggles as well as parts that are a more practical, British version of "<a href="http://www.amazon.com/Lean-In-Women-Work-Will/dp/0385349947">Lean In</a>".<br>
<br>
You can listen to her talk in "<a href="http://podcasts.ox.ac.uk/life-story-pioneer-hi-tech-philanthropy-audio">The Life Story of a Pioneer: From Hi-tech to Philanthropy</a>" (the subject of about IT and running a software company begins about 12 minutes in, the second half of the talk is dedicated to her philanthropy mostly for autism). There's also an <a href="http://www.oii.ox.ac.uk/people/?id=63#tab_webcasts">earlier recorded video of that talk and others on her University of Oxford page</a>.<br>
<br>
The early British IT industry wasn't only about commercialising military projects or solving hardware and software problems but it was a way of effecting social change - to allow more people to work more flexibly.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-3322141.post-19112141776469915622013-01-31T13:20:00.000+10:002013-01-31T13:26:33.698+10:00Removing Large Files from GitWhen I've used git I've used it pretty much like CVS, SVN and any other version control system I've used before - I've checked in binary files. Whether that's DLLs or jars or gems, I've checked them in. Pretty much everywhere I've worked people have said this is a problem, and I've tended to argue it solves a lot of problems - one of the main ones being that when the repository and its management software fail, I still have a working system from a checkout/clone/etc.<br />
<div>
<br /></div>
<div>
The price of this is that sometimes you need to cleanup old binary files. Git makes this complicated but once you've found a couple of tools then it's relatively straightforward.</div>
<div>
<br /></div>
<div>
Stack Overflow has Perl and Ruby scripts that wrap a few git commands to list all files in a repository above a certain size: "<a href="http://stackoverflow.com/questions/298314/find-files-in-git-repo-over-x-megabytes-that-dont-exist-in-head">Find files in git repo over x megabytes, that don't exist in HEAD</a>". The main gist of it is (in Ruby):<br />
<br /></div>
<pre>Megabyte  = 1024 ** 2
threshold = 10 * Megabyte          # report files larger than 10 MB
head      = 'HEAD'
big_files = {}

IO.popen("git rev-list #{head}", 'r') do |rev_list|
  rev_list.each_line do |commit|
    commit.chomp!
    for object in `git ls-tree -zrl #{commit}`.split("\0")
      bits, type, sha, size, path = object.split(/\s+/, 5)
      size = size.to_i
      big_files[sha] = [path, size, commit] if size >= threshold
    end
  end
end

big_files.each do |sha, (path, size, commit)|
  where = `git show -s #{commit} --format='%h: %cr'`.chomp
  puts "%4.1fM\t%s\t(%s)" % [size.to_f / Megabyte, path, where]
end
</pre>
<br />
<div>
Then to remove the old files from the repository:<br />
<br /></div>
<pre>git filter-branch --force --index-filter 'git rm --cached --ignore-unmatch [full file path]' -- --all
git push --force
</pre>
<pre></pre>
<div>
<br />
Then to cleanup any used space in the git repository:<br />
<br /></div>
<pre></pre>
<pre>rm -rf .git/refs/original/
rm -rf .git/logs/
git reflog expire --expire=now --all
git gc --aggressive --prune=now
</pre>
<div>
</div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-3322141.post-21891232045716914382012-12-29T08:09:00.002+10:002013-01-31T16:22:36.940+10:00Transparent SalariesThe stereotype is that developers are notoriously bad at human interactions. I'd suggest that developers are notoriously bad at interactions that they see as fake - things like small talk and negotiations. In a developer's mind, or to be honest mine at least, the ability to get paid well or to pay less than retail for a product shouldn't be based on your ability to pretend you're friendly with someone you're not; it should be based on some sort of system. Why not create a self-consistent system rather than rely on interacting with people?<br />
<br />
With this in mind I decided to try to create a transparent system at work to handle salaries. The problems I see with the way traditional salary is handled, especially the lack of transparency, include:<br />
<ul>
<li>It combines performance with remuneration,</li>
<li><a href="http://joelonsoftware.com/articles/ladder.html">Programmers are notoriously bad at valuing themselves</a>, communicating it with others and ensuring that they are adequately paid during a job interview or while employed,</li>
<li>It prevents an objective assessment of what your roles and responsibilities are in the organisation,</li>
<li>It lacks an acknowledgement of what your skills are worth in the job market,</li>
<li>It creates two groups: management and developers. This allows a combative attitude to be created and is used to justify why developers shouldn't trust business people and management,</li>
<li>People tend to find out anyway.</li>
</ul>
<div>
Some of these points I'll admit are difficult to solve whether it's a transparent system or not. However, the last two points, which I think are especially toxic, can be solved with a transparent system. In a closed salary system, people are encouraged to secretly find out what other people are worth and to provoke comparisons between each other. The time periods are often long and the information often incorrect. If a system is transparent you can solve that problem by making the information accurate and positive.<br />
<br />
People tend to ask, "Why does Mary earn more than me?" I think I'm a better programmer/analyst/whatever than she is. Was it just because Mary started when the company had more money?</div>
<div>
<br /></div>
<div>
Joel Spolsky is probably one of the key influences that I've seen on having transparent salaries. For example, "<a href="http://www.inc.com/magazine/20090401/how-hard-could-it-be-employees-negotiate-pay-raises.html">Why I Never Let Employees Negotiate a Raise</a>":</div>
<blockquote class="tr_bq">
"...we knew that we wanted to create a pay scale that was objective and transparent. As I researched different systems, I found that a lot of employers tried to strike a balance between having a formulaic salary scale and one that was looser by setting a series of salary "ranges" for employees at every level of the organization. But this felt unfair to me. I wanted Fog Creek to have a salary scale that was as objective as possible. A manager would have absolutely no leeway when it came to setting a salary. And there would be only one salary per level."</blockquote>
The <a href="http://joelonsoftware.com/articles/ladder.html">Fog Creek Ladder</a> is based on the idea of achieving <a href="http://construx.com/?nid=244">a certain level of capability</a>. The problem I had with the Fog Creek solution was that it seemed to suggest, especially at the skills level, that a programmer starts off needing help and working with others and then slowly achieves the ability to work alone. Where I work we wanted the opposite: as you get better at programming you get better at being able to explain, to listen and to work with others. I think this is especially important if you want to work in an <a href="http://va312ozgunkilic.wordpress.com/2010/12/07/archigram-plug-in-city/">environment with living architecture</a>.<br />
<br />
So the first input is simply what you do - this should be objective and easy to see (again, we're assuming a more transparent work environment where work is checked in or on a wiki - if it's not shared, you haven't done it). It's assumed that you perform well - if you're not, you're not doing your job. You can argue your role and performance separately from salary, as these are assumed correct coming in.<br />
<br />
The other input is local salary. As Joel has said, if salaries rise quickly or fall sharply then employees' salaries should too.<br />
<br />
With this in mind there were three factors we used to calculate salary:<br />
<ol>
<li>Experience (4 bands - 0-2 rating),</li>
<li>Scope of Responsibility (0-5 rating) and</li>
<li>Skill Set (0-5 rating).</li>
</ol>
<div>
Experience has the least weight and is geared heavily towards moving from graduate to intermediate (three bands over 5 years) and maxing out after 15 years. </div>
<div>
<br /></div>
<div>
The scope of your responsibilities starts with the ability to make small technical decisions, moves on to choices about which libraries are used, and finally covers cross-product decisions. This doesn't mean that we have architect roles though; it means that if you are making these decisions that's what you get paid for, not the other way around.</div>
<div>
<br /></div>
<div>
Skill set is pretty much technical ability, with an emphasis on being able to break work up into different levels of tasks: creating tasks from features, features from iterations, iterations from epics, and charting a course across product cycles and customers.</div>
<div>
<br /></div>
<div>
The next part is how we find an objective measure of salaries to match the levels we've created. I found a <a href="http://www.itcom.com.au/images/media/Itcom-Salary-Guide-qld-2012.pdf">Queensland salary guide</a>:</div>
<table>
<thead>
<tr>
<th>Software</th>
<th>Junior</th>
<th>Intermediate</th>
<th>Senior</th>
</tr>
</thead>
<tbody>
<tr>
<td>Analyst Programmer - J2EE</td>
<td>$60,000</td>
<td>$90,000</td>
<td>$110,000</td>
</tr>
<tr>
<td>Analyst Programmer - MS.Net</td>
<td>$60,000</td>
<td>$90,000</td>
<td>$120,000</td>
</tr>
<tr>
<td>Analyst Programmer - Other</td>
<td>$60,000</td>
<td>$85,000</td>
<td>$110,000</td>
</tr>
<tr>
<td colspan="1">Applications / Solutions Architect</td>
<td colspan="1">$100,000</td>
<td colspan="1">$140,000</td>
<td colspan="1">$180,000</td>
</tr>
<tr>
<td colspan="1">Team Leader - J2EE</td>
<td colspan="1">$90,000</td>
<td colspan="1">$108,000</td>
<td colspan="1">$117,000</td>
</tr>
<tr>
<td colspan="1">Team Leader - MS.Net</td>
<td colspan="1">$85,500</td>
<td colspan="1">$100,000</td>
<td colspan="1">$122,000</td>
</tr>
<tr>
<td colspan="1">Team Leader - Other</td>
<td colspan="1">$81,000</td>
<td colspan="1">$90,000</td>
<td colspan="1">$99,000</td>
</tr>
</tbody>
</table>
<br />
<div>
The main problem with these guides is the lack of acknowledgement of cross-functional abilities. They tend to break employees out by traditional titles: system administrator, database administrator, support, architect and programmer. These are all roles that I expect everyone to be able to do. We picked the highest programmer category (MS.Net), but you could argue that the range should be higher based on the ability to handle iterations, customers and architecture (so maybe between $60,000 and $180,000).<br />
<br />
Our version of <a href="http://joelonsoftware.com/articles/ladder.html">Joel's ladder</a>:<br />
<br /></div>
<table>
<tbody>
<tr>
<th>Experience</th>
<th colspan="6">Average of Scope and Skills</th>
</tr>
<tr>
<td colspan="1"></td>
<td colspan="1">0</td>
<td colspan="1">1</td>
<td colspan="1">2</td>
<td colspan="1">3</td>
<td colspan="1">4</td>
<td colspan="1">5</td>
</tr>
<tr>
<td>Graduate</td>
<td>0</td>
<td>1</td>
<td>2</td>
<td>3</td>
<td>4</td>
<td>5</td>
</tr>
<tr>
<td>Junior</td>
<td>1</td>
<td>2</td>
<td>3</td>
<td>4</td>
<td>5</td>
<td>6</td>
</tr>
<tr>
<td colspan="1">Intermediate</td>
<td colspan="1">1.5</td>
<td colspan="1">2.5</td>
<td colspan="1">3.5</td>
<td colspan="1">4.5</td>
<td colspan="1">5.5</td>
<td colspan="1">6.5</td>
</tr>
<tr>
<td colspan="1">Senior</td>
<td colspan="1">2</td>
<td colspan="1">3</td>
<td colspan="1">4</td>
<td colspan="1">5</td>
<td colspan="1">6</td>
<td colspan="1">7</td>
</tr>
</tbody>
</table>
<br />
The maximum score is 7 with the base values starting from your experience (0-2).<br />
<div>
<br />
So our "developer" salary was:</div>
<table>
<thead>
<tr>
<th>Graduate
</th>
<th>Junior
</th>
<th><div>
Intermediate</div>
</th>
<th><div>
Senior</div>
</th>
</tr>
</thead>
<tbody class="">
<tr>
<td>$40,000</td>
<td>$60,000</td>
<td>$90,000</td>
<td>$120,000</td>
</tr>
</tbody>
</table>
<br />
<div>
Each point (from the previous table) is weighted at roughly $11,400 ($80,000 difference / 7 points), which means that if the score comes out to a non-whole number you can interpolate between grades - a 6.3 would be $111,820 ($40,000 + 6.3 * $11,400). What might be a bit confusing is that $40,000 is really the minimum and $120,000 the maximum.</div>
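To make the arithmetic concrete, here is a minimal sketch of the calculation in Python. The function and constant names are mine; the numbers (the $40,000 base, the rounded $11,400 per point, and the experience points read off the ladder table above) come from the post:

```python
# Sketch of the salary calculation described above. Names are mine; the
# figures are the post's ($40,000 base, ~$11,400 per point, experience
# contributing 0-2 points as in the ladder table).

EXPERIENCE_POINTS = {"graduate": 0.0, "junior": 1.0, "intermediate": 1.5, "senior": 2.0}
BASE = 40_000
PER_POINT = 11_400  # the post rounds $80,000 / 7 points to $11,400

def score(experience: str, scope: float, skills: float) -> float:
    """Experience band (0-2 points) plus the average of scope and skills (0-5 each)."""
    return EXPERIENCE_POINTS[experience] + (scope + skills) / 2

def salary(experience: str, scope: float, skills: float) -> int:
    return round(BASE + score(experience, scope, skills) * PER_POINT)

# A 6.3 (e.g. senior, scope 4, skills 4.6) comes out at $111,820,
# matching the worked example above.
```

Note that with the rounded $11,400 per point, a perfect score of 7 comes out at $119,800 rather than exactly the $120,000 senior figure in the band table - the rounding is the post's.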
<div>
<br /></div>
<div>
Overall I think this is a better system than negotiating up front and then at regular intervals (usually before or after a project). Negotiation reeks of an up-front-heavy process, and I wonder if it really needs to be one. Salary seems to be one of the last things that isn't considered a continuous process, unlike most things in software development now. By making salary transparent you turn it into another feedback process. Could you turn it into an iterative one? Could you iterate on it more quickly than yearly - possibly monthly or weekly?<br />
<br />
While the inputs are supposed to be objective, you can't say this process is value-free. We've made choices about what we think is more important. As with many of these processes, getting agreement may be harder than setting up the initial process. It might be as hard as trying to retroactively apply a coding standard.<br />
<br />
The only negative I can think of is if you're a person (especially in business) who believes that everything is a negotiation and that you should never leave anything on the table. This is where I think the developer-versus-business idea comes in. I think it's an overall cultural negative, especially if these are the same people who are creating customer contracts and the like. As a developer you want to work with your customers and business people.<br />
<br />
<b>Update: </b>"<a href="http://online.wsj.com/article/SB10001424127887323644904578272034121941000.html">Psst...This Is What Your Co-Worker Is Paid</a>":<br />
<blockquote class="tr_bq">
<span style="background-color: white; font-family: Arial, Helvetica, sans-serif; font-size: 15px; line-height: 22.5px;">Little privacy remains in most offices, and as work becomes more collaborative, a move toward greater openness may be inevitable, even for larger firms...</span><span style="font-family: Arial, Helvetica, sans-serif; font-size: 15px; line-height: 22.5px;">But open management can be expensive and time consuming: If any worker's pay is out of line with his or her peers, the firm should be ready to even things up or explain why it's so...</span><span style="font-family: Arial, Helvetica, sans-serif; font-size: 15px; line-height: 22.5px;">And because workers can see information normally kept under wraps, they may weigh in on decisions, which can slow things down, company executives say.</span> </blockquote>
<blockquote class="tr_bq">
<span style="font-family: Arial, Helvetica, sans-serif; font-size: 15px; line-height: 22.5px;">Once employees have access to more information, however, they can feel more motivated.</span></blockquote>
</div>
Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-3322141.post-56625931056972304982012-10-02T13:44:00.000+10:002013-01-31T13:23:16.714+10:00Imagination AmplifierThis is a reproduction of an article that appeared in <a href="http://archive.org/details/1988-05-computegazette">COMPUTE!'s Gazette, Issue 59</a>. I'm reproducing it here because I think it's particularly good, and on archive.org it appears in formats where it's unlikely to be found again. <a href="http://archive.org/stream/1987-11-compute-magazine/Compute_Issue_090_1987_Nov#page/n65/mode/1up">Alternative link to his November COMPUTE! article</a>.<br />
<br />
<h2>
<a href="http://archive.org/stream/1988-05-computegazette/Compute_Gazette_Issue_59_1988_May#page/n55/mode/1up">Worlds Of Wonder - WOW!</a></h2>
In this month's mailbag I received a letter from Art Oswald of Goshen, Indiana. Art was responding to my article in the <a href="http://www.atarimagazines.com/compute/issue90/The_World_Inside_The_computer.php">November COMPUTE! magazine about computers of the future</a>. He wrote: "In the future, the phrase 'I wonder' will become obsolete. I won't have to wonder what would happen if, or wonder what something was like, or wonder how something might be. I would just ask my computer, and it would simulate by means of holographic projection anything my imagination could come up with."<br />
<br />
Now, I ask you, Art, is this something to look forward to or something to dread?<br />
<br />
I have a new science-fiction book coming out which deals with this subject — the effect of computers (and electronic media, in general) on the human imagination. The book is <i><a href="http://www.abebooks.com/9780312930813/Robot-Odyssey-Escape-Robotropolis-DIgnazio-031293081X/plp">Robot Odyssey I: Escape from Robotropolis</a></i> (Tor Books, April 1988). Listen to two teenage boys carrying on a conversation in the year 2014:<br />
<blockquote class="tr_bq">
<i>We think plenty using computers, but we don't imagine. We don't have to imagine what the fourth dimension is, or what will happen if we combine two chemicals, or what the dark side of the moon looks like. The computer is there a step ahead of our imagination with its fantastic graphics, cartoons, and music. We no longer imagine because the computer can do our imagining for us.</i> </blockquote>
<blockquote class="tr_bq">
<i>
"So why imagine?" Les said. "My pop says most people's imaginations are vague and fuzzy anyway. If the computer imagines stuff for them, it'll probably be a big improvement.</i></blockquote>
Les is right. If the computer "imagines" something, it is usually based on a database of facts, the vision of an artist, or a scientific model created by experts. How could our puny imaginations compete with images that are this inspired, detailed, and exact?<br />
<h2>
Frontiers Of Knowledge </h2>
<div>
Science-fiction writers think a lot about new worlds of wonder. It is the human desire to "go boldly where no man has gone before" that is among our more noble impulses. It may even be the "engine" that drives us to innovate, invent,
and take risks. Without this engine, we might sink into a kind of emotional and intellectual swamp. Life could become extremely boring. Every time we contemplated a decision, we would first ask our computer, "What if?" and see what the consequences might be. Knowing too much might even paralyze us and cool our risk-taking ardor.</div>
<h2>
Imagination Amplifiers</h2>
<div>
Art writes that the phrase <i>I wonder</i>
may be rendered obsolete by computers, but I'm not certain that he's right. Instead, I think that we could use computers to stimulate our imagination and make us wonder
about things even more.<br />
<br />
Where does our imagination come from? I picture the imagination as a Lego<sup><span style="font-size: xx-small;">TM</span></sup> set of memory blocks stuffed into the toy chest of our mind. When we imagine something, we are quickly and intuitively building a tiny picture inside our heads out of those blocks. The blocks are made up of images,
tastes, smells, touches, emotions, and so on — all sorts of things that
we've experienced and then tucked away in a corner of our minds. The quality of what we imagine depends on three things: how often we imagine, the quantity and diversity of blocks that we have to choose from, and our ability to combine the blocks in original — and piercingly true — ways.<br />
<br />
Most of us have "pop" imaginations created from images supplied to us by pop culture. We read popular books, see popular movies, watch the same sitcoms and commercials, and read the same news stories in our newspapers. It's no wonder that much of what we
imagine is made up of prefab structures derived, second hand, from society's small group of master "imagineers." Electronic media has made it possible for these imagineers to distribute their imaginations in irresistible packages. If you have
any doubt, ask an elementary school teacher. Her students come to school singing jingles from commercials and write "original" compositions which really are thinly
disguised copies of toy ads, movies, and Saturday morning cartoons.<br />
<br />
Where does the computer fit into this picture? It could be our biggest defense against the imagination monopoly which the dispensers of pop culture now have. If
we can tell the computer "I wonder" or ask it "What if?" it will work with us to build compelling images of what we imagine. If the process is interactive, and we can
imagine in rough drafts, then we can polish, ornament, and rework our images as easily as a child working with sand on a beach. Then maybe the images inside our
heads will be from imagination experiments that we do with our computers and not stale, leftover images pulled from the refrigerator of pop culture.</div>
<div>
<br /></div>
<div>
Fred D'Ignazio, Contributing Editor</div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-3322141.post-70640601584944943692012-09-20T14:01:00.000+10:002012-09-20T14:40:27.113+10:00Not Much BetterI've been reading "<a href="http://www.amazon.com/Question-Truth-Christianity-Homosexuality/dp/0826459498">A Question of Truth</a>", which is primarily about homosexuality in the Catholic Church and references to it in the Bible. It has a lengthy, careful but very readable introduction: it explains many of the currently held views, distinguishes acts from intents, describes the damage done to people, carefully separates the different aspects of sexuality, and does a reasonably good job of describing the difference between intensional and extensional usage.<br />
<br />
A lot of this is Bible study 101 - the modern ideas like <a href="http://en.wikipedia.org/wiki/Romance_%28love%29#Historical_definition_of_romantic_love">love</a>, homosexuality, marriage, property, slavery, and so on have moved or did not exist when the Bible was written, so what people often read into it is not the original intent - not that I would say that the original intent is much better - and that's the real problem.<br />
<br />
The book effectively reasons around all the major passages that people use to treat gay people badly. However, in the course of the reasoning, it just seems to move away from treating homosexuality as sinful to reinforcing women's historical position in society.<br />
<br />
For example, the infamous Leviticus <a href="http://en.wikipedia.org/wiki/The_Bible_and_homosexuality#Leviticus_18_and_20">passage about men not lying with men</a> is reasoned to mean not that the act is wrong but that a man shouldn't treat a man like a woman. Another is the story of <a href="http://en.wikipedia.org/wiki/Sodom_and_Gomorrah">Lot and our friends the Sodomites</a>, which again turns on offering up your daughters for reasons of hospitality; the suggestion is that Sodom was destroyed because they humiliated the visitors, not because of any need for gay love.<br />
<br />
There's a sentence or two along the lines that no modern Christian would treat women in this way (or have slaves?), which I thought rather undermines the whole point of the exercise.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-3322141.post-14433409778245595342012-05-18T06:13:00.002+10:002013-02-01T09:25:46.146+10:00Constructivism - Why You Should CodeI think <a href="http://www.codinghorror.com/blog/2012/05/please-dont-learn-to-code.html">this article on why you shouldn't code</a> is wrong. It's wrong in the way I was wrong in high school when I thought I would never need to know German, art or biology. It's wrong in the way I was wrong about never needing to know set theory or relational theory or category theory. But it's also wrong in ways I will never really know, "<a href="http://www.papert.org/articles/ComputerAsCondom.html">Computer As Condom</a>":
<br />
<blockquote>
Debbie hated math and resisted everything to do with it. Tested at the bottom of the scale. She learned to program the computer because this let her play with words and poetry, which she loved. Once she could write programs she found a way to tie fractions into words and poetry. Writing witty programs about fractions led her to allow herself to think about these previously horrible things. And to her surprise as much as anyone's her score on a fractions test jumped into the upper part of the scale.</blockquote>
What you do as a job programming in C#, Java, JavaScript or whatever has very little to do with the way people use coding to <a href="http://en.wikipedia.org/wiki/Constructionist_learning">learn about learning</a>. That's the most disappointing thing about the article: the terrible idea that learning how to code lessens the world if you do it wrong. Learn to <a href="http://blip.tv/open-source-developers-conference/temporally-quaquaversal-virtual-nanomachine-programming-in-multiple-topologically-4466153">code backwards in time in Latin in Perl</a>, but don't listen to anyone who says you shouldn't code.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-3322141.post-27535881493337448242012-05-14T08:12:00.000+10:002012-05-14T08:40:09.078+10:00LectorialI just finished a study group on <a href="https://github.com/learnhaskell-brisbane/learn/wiki">Learn You a Haskell for Great Good</a>. It was a great experience for many reasons, but I think the way each session was structured as a combination of lecture and tutorial deserves particular attention.<br />
<br />
The weekly structure was fairly straightforward: a <a href="https://github.com/learnhaskell-brisbane/learn/wiki/Guide-for-Chapter-Leaders">chapter leader</a> covers a chapter the week before the rest of the group, writes a summary and sets some programming questions. The weekly sessions took about an hour and a half. This consisted of the chapter leader going through their summary, with the group interjecting with questions and answers (if the chapter leader didn't know, there might be some furious Googling to find a good reference or an answer that someone half remembered). The programming questions and answers would usually go around the table: each person would answer a question and the others would then comment on it or show their own answer if it was particularly different (or shorter, or whatever). The split was roughly 60/40 between lecture and programming/tutorial.<br />
<br />
Compared to university courses, where you often had two hours of lectures and then one or two hours of tutorials spread out over a week, this arrangement seemed very time efficient. The other advantage was having the students run the study group. The chapter leader has to spend a lot more time making sure they understand the chapter in order to answer any questions that come up during the review and to set the programming questions. For me, setting the questions and making sure you had answers (and, by the end of it, tests to help people along) was probably the best part of the learning experience. There was no real hiding if you hadn't done the answers either - partially because it was such a small group but also because of the high level of participation.<br />
<br />
It'd be interesting if there were university courses where you were graded not just on an examination and assignments but also on the questions you set and on whether you were able to run a small group of people through a class. It would also make tutorials, which are often dropped by students, more relevant.<br />
<br />
It seems "<a href="http://www.flinders.edu.au/teaching/quality/first-year-students/good-practice-at-flinders-university.cfm">lectorial</a>" also means a "large tutorial in a lecture hall to give context around information given in lectures". They also mention <a href="http://mams.rmit.edu.au/u9582m27wzeo1.pdf">small group activities and class-led presentations</a>, so there is some overlap.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-3322141.post-62376476904585020882012-01-26T08:18:00.002+10:002012-01-29T08:27:01.940+10:00One Platform<div class="tr_bq">A couple of things have struck me about the iPad and iBooks Author. If you want to read some background, <a href="http://daringfireball.net/2012/01/ima_set_it_straight_this_watergate">John Gruber has a good summary</a>. It may well come down to whether being focused on one thing is wrong.<br />
<br />
<div class="tr_bq">Firstly, Steve Jobs <a href="http://www.the-digital-reader.com/2012/01/11/what-steve-jobs-thought-about-textbooks/">is quoted in his biography saying he'd give away textbooks</a>. This was a pretty big bargaining chip when Apple was talking to the textbook publishers: go with us or we'll give your product away for free. How does this differ from <a href="http://news.bbc.co.uk/2/hi/special_report/1998/04/98/microsoft/198390.stm">Bill Gates saying they'd cut off Netscape's oxygen supply</a>?</div><br />
The other thing it reminds me of is <a href="http://truckandbarter.com/2004/10/bill-gates-virt.html">Bill Gates' developer virtuous cycle</a>. This is where developers write applications for a particular platform, users pick the platform with the most applications which then feeds back to developers supporting that platform. In the past, developers have had to make a single choice as to which platform they wanted to support in order to succeed. It continues to happen with Android and iPhone. Jeff Raikes has given a good example in the early days of Microsoft in, "<a href="http://ecorner.stanford.edu/authorMaterialInfo.html?mid=914">The Principle of Agility</a>", he says:</div><blockquote>I suspect many of you or in the audience might if I ask you the question of, "What was Microsoft's first spreadsheet?" You might think Excel was our spreadsheet. But in fact, we had a product called Multiplan...Our strategy was to be able to have our application products run on all of those computing platforms because at that time there were literally hundreds of different personal computers.</blockquote><blockquote>And on January 20th, 1983 I realized, I think Bill Gates also realized we had the wrong strategy. Any guesses to what happened on January 20th, 1983? Lotus, it was the shipment of Lotus 1-2-3. How many personal computers did Lotus run on in January of 1983? One, and exactly one. And it was a big winner. So what we learned was when it came to customer behavior. It wasn't whether you had a product that run on all of those computing platforms. What really mattered to the customer was, did you have the best application product on the computer that they own. And Lotus 1-2-3 was the best spreadsheet. In fact, it was the beginning of a "formula for success in applications". That I defined around that time called, "To win big, you have to make the right bet on the winning platform."</blockquote><blockquote>So what's the principle? The principle is agility. 
If you're going to be successful as an entrepreneur what you have to do is you have to learn. You have to respond. You have to learn some more. You have to respond some more. And that kind of agility is very important. If we had stayed on our old strategy, we would not be in the applications business today. In fact, one of the great ironies of that whole episode is that in the late '80s or early '90s our competitors, WordPerfect, Lotus. What they really should have been doing was betting on Windows. But instead they were betting on and WordPerfect was the best example. Betting on, putting WordPerfect on the mainframe, on minicomputers. In fact, they went to the lowest common denominator software strategy which we switched out of in the 1983 timeframe. So, for my key principle is, make sure that you learn and respond. Show that kind of agility. </blockquote>This is echoed in one of the recent exchanges (about an hour into <a href="http://twit.tv/show/macbreak-weekly/283">MacBreak Weekly 283</a>) where Alex Lindsay talks about how important it is to him to make education interesting and how he's not going to wait for standards, he just wants to produce the best. Leo Laporte responds by saying how important it is for an open standard to prevail in order to prevent every child in America having to own an iPad in order to be educated or informed.<br />
<br />
You have to wonder if developers have reached a level of sophistication that allows them to use a cross-platform solution, or whether that will ever happen. I think it's inevitable that a more open platform will succeed, but I'm not sure whether multiple platforms can succeed - we shall see.<br />
<br />
If you want to hear more there are many interesting conversations around, including: <a href="http://5by5.tv/hypercritical/51">Hypercritical 51</a>, <a href="http://twit.tv/show/macbreak-weekly/283">MacBreak Weekly 283</a> and <a href="http://twit.tv/show/this-week-in-tech/337">This Week in Tech 337</a>.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-3322141.post-20238346747880745532012-01-12T07:30:00.003+10:002012-03-18T20:06:28.224+10:00Pretending We All Don't Know<a href="http://www.thisamericanlife.org/radio-archives/episode/454/mr-daisey-and-the-apple-factory">Some amazing writing and performance by Mike Daisey</a> (<a href="http://podcast.thisamericanlife.org/podcast/454.mp3">mp3</a>):<br />He just walked up to the Foxconn plant and wanted to see if anyone wanted to talk to him:<br /><blockquote class="tr_bq"><br />I wouldn't talk to me...she runs right over to the very first worker...and in short order we cannot keep up...the line just gets longer and longer...everyone wants to talk...it's like they were coming to work everyday thinking, "You know it'd be great? It'd be so great if somebody who uses all this crap we make, everyday all day long, it'd be so great, if one of those people came and asked us what was going on because we would have stories for them...</blockquote><br />I haven't gotten all the way through, but he has a bit about talking to a girl who cleaned the glass on the assembly line:<br /><blockquote class="tr_bq"><br />You'd think someone would notice this, you know? I'm telling you that I don't know Mandarin, I don't speak Cantonese...I don't know fuck all about Chinese culture but I do know that in my first two hours on my first day at that gate I met workers who were 14 years old, 13 years old, 12. Do you really think that Apple doesn't know? In a company obsessed with the details. With the aluminium being milled just so, with the glass being fitted perfectly into the case. Do you really think it's credible that they don't know? 
Or are they just doing what we're all just doing, do they just see what they want to see?</blockquote><br /><p><br />It seems absolutely credible that they do know.</p><br /><b>Update 17th of March</b>: <a href="http://www.thisamericanlife.org/radio-archives/episode/460/retraction">Retracting Mr Daisey</a> (<a href="http://podcast.thisamericanlife.org/podcast/460.mp3">mp3</a>): it appears his story was more fiction than not. I took it more as performance than journalistic reporting, but many claims weren't just errors; they were simply made up. <a href="http://www.thisamericanlife.org/blog/2012/03/retracting-mr-daisey-and-the-apple-factory">He did out-and-out lie</a> <a href="http://www.thisamericanlife.org/radio-archives/episode/454/transcript">when asked about child labour</a>: "Well I don't know if it's a big problem. I just know that I saw it." Which is a shame, because <a href="http://www.marketplace.org/topics/business/apple-economy/apple-admits-child-labor-growing-problem-its-china-factories">verified reports of these conditions contain similar claims</a>.<br /><br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-3322141.post-78401625015208964752012-01-04T07:39:00.000+10:002012-01-04T08:02:23.083+10:00Jesus Says Share FilesA famous story: Jesus takes some fish and loaves (<a href="http://www.biblegateway.com/passage/?search=john%206:5-6:15&version=KJV">accounts</a> <a href="http://www.biblegateway.com/passage/?search=mark%208:1-8:9&version=KJV">differ</a> - although maybe he did it more than once, setting up the "Jesus's Food Multiplier" stall every second and fourth Saturday of the month) and feeds some people (again accounts differ, and they don't count women and children as people - let's just skirt around that entire issue, shall we). <br />
<br />
Everyone was impressed - even the disciples who were fishermen and had deep ties with the community. They didn't say, "Hey, Jesus, you've just destroyed our business model. You can't go around feeding thousands of people per fish. One person, one fish - that's the way it has always been and that's the way it should always be."Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-3322141.post-83188969929350388922012-01-03T14:15:00.002+10:002012-09-20T14:47:35.024+10:00Partitioning Graphs in HadoopA recent article at LinkedIn called "<a href="http://engineering.linkedin.com/hadoop/recap-improving-hadoop-performance-1000x">Recap: Improving Hadoop Performance by (up to) 1000x</a>" had a section called "Drill Bit #2: graph processing" mentioning the problem of partitioning the triples of RDF graphs amongst different nodes. <br />
<br />
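As a rough illustration of the partitioning idea, here is a minimal MapReduce-style sketch in Python. This is my own simplification, not the scheme from the cited paper (which also co-locates triples reachable from a subject): the map step keys each triple by a hash of its subject, and the shuffle then delivers all triples about the same resource to the same reducer, i.e. the same compute node.

```python
# Toy sketch of subject-based partitioning of RDF triples (my simplification,
# not the paper's scheme): map keys each triple by a hash of its subject,
# and the simulated shuffle groups triples so that all triples about one
# resource end up in the same partition.

from collections import defaultdict

NUM_PARTITIONS = 4

def map_triple(triple):
    subject = triple[0]
    # Key by subject so all of a resource's triples share a partition.
    return hash(subject) % NUM_PARTITIONS, triple

def partition(triples):
    partitions = defaultdict(list)  # stands in for the shuffle phase
    for t in triples:
        key, value = map_triple(t)
        partitions[key].append(value)
    return partitions

triples = [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:alice", "foaf:name", '"Alice"'),
    ("ex:bob", "foaf:name", '"Bob"'),
]
parts = partition(triples)
# Both ex:alice triples are guaranteed to land in the same partition.
```

The point of the sketch is only the grouping guarantee; a real implementation would be a Hadoop Partitioner over a triple store's key space rather than an in-memory dictionary.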
According to "<a href="http://www.cs.yale.edu/homes/dna/papers/sw-graph-scale.pdf">Scalable SPARQL Querying of Large RDF Graphs</a>", they use MapReduce jobs to create indexes where triples such as (s, p, o) and (o, p', o') end up on the same compute node. The idea of using MapReduce to create better indexing is not a new one, but it's good to see the approach being used to prepare RDF for querying rather than actually using MapReduce jobs to do the querying. It's similar to what I did with <a href="http://biomanta.org/publications/2008/eScience2008.pdf">RDF molecules, creating a level of granularity between graphs and nodes</a>, as well as to things like Nutch.Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-3322141.post-15663852101861230752011-12-31T16:59:00.000+10:002012-01-05T10:20:09.180+10:00Why the Cloud isn't the InternetI think people are just starting to realize what some of the cloud vendors are providing and what the drawbacks are. <a href="http://www.amazon.com/Steve-Jobs-Walter-Isaacson/dp/1451648537">Steve Jobs is quoted in his biography</a> describing the intent behind iCloud:<br />
<blockquote class="tr_bq">
We need to be the company that manages your relationship with the cloud - streams your videos and music from the cloud, stores your pictures and information, and maybe even your medical data...over the next few years, the hub is going to move from the computer into the cloud...So we wrote all these apps - iPhoto, iMovie, iTunes - and tied in our devices, like the iPod and iPhone and iPad...We can provide all the syncing you need, and that way we can lock in the customer.
</blockquote>
The Mac has always been different to Windows. One of the differences Windows users notice is that in OS X you switch between applications, whereas in Windows you switch between documents (or windows). The Apple cloud maintains that pattern by syncing between applications rather than documents (or individual files). This approach confuses a lot of people.<br />
<br />
This is different to how most Mac users currently sync their files with Dropbox. iCloud has ended up following its Mac heritage whereas Dropbox sticks to file syncing. <a href="http://www.managewithoutthem.com/blog/?p=379">Matthew writes</a>:<br />
<blockquote class="tr_bq">
The difference between Dropbox and iCloud synchronization is that Dropbox is theoretically just a file system...If you have a document that you edit on your iPad and sync with Dropbox you can edit that same file, using a different application, on your PC...The iCloud experience is completely different. The only way to edit a document across platforms or devices is to use a version of the application for each device. Not a compatible application...it may actually make me change the desktop application that I use purely based on iCloud support.</blockquote>
If you want to read more about Dropbox and Apple there's <a href="http://www.forbes.com/sites/victoriabarret/2011/10/18/dropbox-the-inside-story-of-techs-hottest-startup/print/">a really good article in Forbes which details how Steve Jobs personally made an offer to buy Dropbox</a>. <br />
<br />
The edges of iCloud - the integration points with applications and the operating system - are incomplete even if you buy into the idea of applications over documents. <br />
<br />
For example, on iOS devices there is a Notes application but on OS X these notes are in a tab in the Mail application. This seems like a weird and non-standard place to put it - if you are going to sync by application you'd think it should be the same application across platforms. <br />
<br />
In <a href="http://support.apple.com/kb/DL1455">iCloud for Windows</a>, Windows users get more choice than OS X users. Mail, Contacts and Calendar integrate with Outlook but you can choose your application for Bookmarks (IE or Safari) and Photo Stream.<br />
<br />
Even within applications <a href="http://shapeof.com/archives/2011/12/state_of_the_meat_2011_edition.html">Apple haven't quite gotten syncing right with iCloud yet either</a> including the <a href="http://www.chrisboyd.net/2011/11/icloud-and-the-new-ios-data-storage-guidelines/">new rules around where files are stored and what is automatically removed or backed up</a>.<br />
<br />
The cloud is about vendor lock-in as much as any other platform, like application servers or databases, but with the extra problem that your data is tied to the vendor's application, cloud and user base. A stickier solution.<br />
<br />
Some, like Google and Facebook, offer export services, but these almost don't matter: you get a mostly useless hunk of data, lose the ability to run the applications and can't reach the users on their network (who may well have been your collaborators).<br />
<br />
With the Internet, the Web and open source you still have the possibility to use your data with applications shared by many people across different networks.<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-3322141.post-58856546706502055722011-12-31T16:56:00.000+10:002011-12-31T22:30:02.092+10:00Global Code Retreat 2012I went to the local <a href="http://coderetreat.org/events/global-day-of-coderetreat-2011">Global Code Retreat held on the 3rd of December</a>. Overall, it was an amazing event - very well hosted and attended. The <a href="http://coderetreat.org/facilitating/structure-of-a-coderetreat">basic structure of the day</a> was six or so 45-minute sessions, trying to implement something with a different person each time. At the end of the 45 minutes, no matter how far you had got, you deleted your solution.<br />
<br />
The problem was "<a href="http://coderetreat.com/gol.html">The Game of Life</a>". I'm pretty familiar with this problem, having come across "Conway's Game of Life" early on in a magazine like Compute! or Byte. <br />
<br />
However, if you walked away with a really awesome solution to "The Game of Life" you probably missed the point - most of the things that were being taught <a href="http://en.wikipedia.org/wiki/Hidden_curriculum">were hidden</a>.<br />
<br />
The solution was really beside the point. One of the main reasons is to repeatedly solve the problem from scratch, based on an idea called <a href="http://en.wikipedia.org/wiki/Kata">kata</a> (movements practiced by yourself or in pairs). This is something I had come across in "<a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>", which reminded me of the time I had spent on projects at home - reimplementing the same thing over and over again.<br />
<br />
Steve Yegge makes the same point in his article "<a href="http://programmingpraxis.com/2009/08/11/uncle-bobs-bowling-game-kata/">Practicing Programming</a>": even as you program in your day job, you may not actually be practicing programming. Repetition in solving the same problems seems to be about keeping the problem fixed and changing how you approach it, free from any time constraints. Most programming jobs involve solving a problem once (or, if you're lucky, doing a proof of concept and then implementing it again).<br />
<br />
The first time around it was awful. I didn't know what I was doing, my environment was a little bit shaky, we couldn't agree on a language and I spent a lot of the time just setting it up. <br />
<br />
It made me aware that, for practically the first time, my personal computer had diverged from my work computer. Not in the "normal" sense of Windows at work and Linux or OS X at home - but what I do at home and what I do at work have diverged to the point where I'm learning in many directions with almost no overlap between the two.<br />
<br />
<div class="p1">
The second time was much better. There was less discussion about which language to use, how to approach the problem, how to test-drive it, whose computer to use and so on. There was still discussion, but we shared a bit more context this time, which made it flow - a big difference from the first time.</div>
<div class="p1">
<br /></div>
<div class="p1">
The third time around changed the format a little: you couldn't talk to your partner and could only express requirements through tests. This quickly sorted the people who were testing from those who weren't. It also seemed to reduce the clutter around what needed to be done. Tests are much less ambiguous than talking through requirements, and once you set up a rhythm of tests it became much easier. Also, the whole room was very quiet. You could imagine that a team doing silent TDD and pair programming wouldn't be the noisiest group in the room (for once).</div>
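To give a flavour of how requirements read when expressed as tests, here's a rough Python sketch (the names and the tiny `step` function are my stand-ins, not what anyone wrote on the day): each rule of the Game of Life becomes a short, unambiguous test.

```python
import unittest
from collections import Counter

def step(live):
    """Stand-in Game of Life step: evolves a set of live (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

class RulesAsRequirements(unittest.TestCase):
    def test_a_lonely_cell_dies(self):
        self.assertEqual(step({(0, 0)}), set())

    def test_a_dead_cell_with_three_neighbours_is_born(self):
        self.assertIn((1, 1), step({(0, 0), (0, 1), (1, 0)}))

    def test_a_block_is_a_still_life(self):
        block = {(0, 0), (0, 1), (1, 0), (1, 1)}
        self.assertEqual(step(block), block)
```

Run with `python -m unittest`; a failing test reads as a violated requirement, which is roughly how the silent round communicated.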
<div class="p1">
<br /></div>
<div class="p1">
Each round thereafter changed the programming requirements: no loops, methods no more than 3 lines, and no if statements.</div>
<div class="p1">
<br /></div>
<div class="p1">
What did I learn? Heaps.</div>
<div class="p1">
<br /></div>
<div class="p1">
I ended up doing Ruby quite a bit; mostly the solution came out at about 30 lines of production code and 30 lines of tests, and you could pretty much finish it in the time allocated. I also did solutions in C# and Haskell. The Haskell solution came out at about 30 lines total - both tests and production code - and met every constraint (no loops, small functions, no if statements).</div>
<div class="p1">
<br /></div>
<div class="p1">
Doing the same problem over and over again is surprisingly effective, and nothing replaces sitting with a person to learn a new language or be exposed to a variety of solutions. One of the tricks - and you find this frequently with pair programming - is that you have to be very good at communicating, both by saying what you're doing and by getting the other person to explain themselves.</div>
<div class="p2">
<br /></div>
<div class="p2">
I also learnt:</div>
<div class="p1">
</div>
<ul>
<li>Even with something as well defined and familiar as "The Game of Life", the solutions were varied and some of the requirements (based on the rules on Wikipedia) were redundant.</li>
<li>My brain is very weak compared to how well Google search works.</li>
<li>Between each new attempt you tend to reflect on each previous solution and see the negatives and positives.</li>
<li>By continually starting a new project, setup time was greatly reduced - the dependencies that get in the way (editors, libraries, searching the web, etc.) were slowly whittled down.</li>
<li>That it's good to throw code away. It frees you up by allowing you to try different approaches or learn something new (like a different language).</li>
<li>Think before you hack.</li>
<li>A functional approach seemed to be where the answers were converging - meeting all the programming constraints that were given.</li>
</ul>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-3322141.post-49702664861013566802011-11-03T06:52:00.001+10:002011-12-31T22:30:10.737+10:00A Review of QI LiveWhen Douglas Adams visited Brisbane in 2000 (possibly 1999) I had a friend sign a copy of "Starship Titanic" - I was too busy at work to see him myself. I have always been a little disappointed that I didn't take the hour to see him. When Stephen Fry announced "QI Live" on Twitter I made sure I wasn't going to miss out. It was only in Melbourne and Perth (at the time) and bugger it if I was going to Perth, so even though I didn't live in Melbourne, I got tickets.<br />
<br />
As I walked into the theatre the place names of France that sound funny appeared on the screen. I remembered them from a previous episode of the TV series. It didn't matter, they'd just played the theme for "Pinky and the Brain" and were playing Tom Lehrer's "The Elements". I was in a good mood and I was happy to be there.<br />
<br />
My wife thought she saw the producer. "What, John Lloyd?" Surely not, but soon after, an usher told us to stop taking photos. It was a little annoying because I was mainly taking photos of the theatre - I'd never been to the Queen Victoria Theatre before. Then a prissy voice said that Mr Fry's manservant would like you all to turn off your phones - so I did.<br />
<br />
The only bit I now remember of Stephen's opening was him telling a joke about a chicken going to a library. It was told well, I guess, but too long for me because I'd heard it before and spent most of the joke remembering my grandmother telling it to me about 20 years before. Maybe that was what he was going for. What was worse, it led to the fact that a frog in California is the only species in the world to go ribbit. Another recycled fact, and I was getting a little bit annoyed.<br />
<br />
Colin Lane was the first panelist and pretty forgettable. He and Andrew played tennis with the "Nobody Knows" paddles, which was okay the first time. Denise Scott didn't really seem to get the format, but she had some good anecdotes - such as being told she looked like that person on TV but couldn't possibly be her. Andrew Denton was far and away the star - he seemingly confused Stephen by asking, "If nobody knows, why isn't he on the panel?" It made the whole thing almost worth it. Except that Stephen then went on to spend most of the evening calling him stupid.<br />
<br />
The first question was about koalas having fingerprints that are indistinguishable from human fingerprints, and that maybe they were doing all the robbing in Australia (which, they said, has the highest burglary rate in the world). This fact <a href="http://www.abs.gov.au/ausstats/abs@.nsf/Lookup/by%20Subject/1370.0~2010~Chapter~International%20comparisons%20(4.4.7)">doesn't seem to be true now</a>, although <a href="http://www.geoffmetcalf.com/guncontrol_20010302.html">it was true in 2001</a>, which is probably closer to when the question was written. <br />
<br />
Some of the other content from previous episodes included: kangaroos not farting and having three vaginas, when the sun sets (a video), the Beatles' Help! album cover, the most popular song being the default Nokia ringtone and slavery not being illegal in the UK until recently. This is just me guessing, but Andrew Denton knew every answer. The members of the audience shouting out certainly did. But then so did Alan, and so did I. Well, I might not have known every question - Alan answered over the top of the 100-point question, so I didn't hear what it was.<br />
<br />
I might be wrong, but one of the ways the show works is that Alan doesn't know the answer to every question. Sometimes some of the guests do (John Sessions and Rory McGrath), but the point is that Alan is the kind of guy the show is supposed to be educating - he's the audience, coming along with us for the ride and a laugh. He probably did reflect some of the audience that night too, though: he looked a bit bored and a bit uncomfortable.<br />
<br />
Given the amount of recycled content, I don't think they would've done this show in the UK - but apparently it's okay to do it in Australia.<br />
<br />
At the end of the show, when Stephen was awarding the points, he said either the people in the audience were smart or they had downloaded episodes not shown in Australia from BitTorrent. This is what made me mad. All episodes from series A onwards had been shown on the ABC (you can Google the series number and ABC to find them on iView, like <a href="http://www.abc.net.au/iview/?series=3145597">Series 4</a>), and some of the facts were also available in the QI books (like <a href="http://en.wikipedia.org/wiki/The_Book_of_Animal_Ignorance">Animal Ignorance</a>). Stephen seemed to be saying that we (whoever "we" are - the producers of the show?) reserve the right to charge you for content we don't think you should've seen, and we're a bit surprised you have, because you've really cheated yourselves out of $200 a seat.<br />
<br />
I'm still mad. QI has a <a href="http://qi.com/about/philosophy.php">philosophy</a> about facts and curiosity. Recycling facts that are now wrong goes against that philosophy. The original idea of the show seems to have been to push back against accepted common knowledge and laziness. The show seemed to scream, "Look it up!" or "Why do you think that?" Recycling content was pure laziness. Stephen and Alan knew the content was recycled, and it ruined the show - or at least the idea behind it.<br />
<br />
In the end, it's hard to know if you should regret the things you don't do versus the things you do.<br />
<br />
As an aside, some interesting facts (sticking with the Australian theme): all living marsupials are from South America not Australia (<a href="http://en.wikipedia.org/wiki/Marsupial">from Wikipedia</a>) and <a href="http://twitter.com/#!/adzebill/status/126781616528957440">echidnas and platypus can make custard</a> (as they produce both eggs and milk).Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-3322141.post-12848676771440007922011-11-02T09:10:00.003+10:002011-12-31T22:38:50.770+10:00Building a Network Over Transactions<div class="separator" style="clear: both; text-align: left;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4lQFELmOr0sHixUwpiZuVAtrhJ1N6mA4FSxSWJJxaSs3Qp_adU95Il9yQ9HotY0yOoaXfqFkq2BTwrBUu_Tz-G417T5FTfAu6Ya-BQ2T0Jk1IrgMgjZE_wL3PqdrsrHnw86UA/s1600/google-business-plan.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="155" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4lQFELmOr0sHixUwpiZuVAtrhJ1N6mA4FSxSWJJxaSs3Qp_adU95Il9yQ9HotY0yOoaXfqFkq2BTwrBUu_Tz-G417T5FTfAu6Ya-BQ2T0Jk1IrgMgjZE_wL3PqdrsrHnw86UA/s200/google-business-plan.png" width="200" /></a></div>
<a href="http://www.tbrc.fi/pubfilet/JBS_Pynn%F6nen%20et%20al%202011.pdf">The new meaning of customer value:
a systemic perspective</a> analyses the provision of value to customers from a systems perspective.<br />
<br />
I had a thought a little while ago that Google is probably one of the first companies where the users and the content providers are basically the same people, and that it makes money, through adverts, by connecting the two together using search.<br />
<br />
It's probably not a new thought but at the time I started to draw a diagram of how it all works. I happened across this diagram (on the left) in this paper (page 4). <br />
<br />
A perverse example is when you search for something and the first hit is your own blog. You're now both the producer and consumer of the same content - with adverts sandwiched in the middle - hmm, a value sandwich.<br />
<br />
In the paper they use Google and Apple as examples:
<br />
<blockquote>
"Google has indeed realized the usability of systemic value-creation principles in building its offering. In contrast to Apple, it uses the value network to generate the revenues. Google provides free, easy-to-use tools for customers to use on the internet, the aim being to generate “eyeballs” for the ads of the advertising customers. In collecting these “eye balls” it has or it creates a product for every internet activity that attracts lots of traffic. From the firm's perspective, the offering elements are integrated to provide the audience for the ads, information being gathered in order to better scope the ads or just to make the customers happy and to promote
other products."</blockquote>
There's an old idea, for the Web anyway, of building a network of customers above extracting value out of each transaction. Over-valuing the creation of the network led to the whole dot-com bubble, and I have been thinking about how business models have progressed since then.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-3322141.post-63255349642404286642011-09-05T13:09:00.004+10:002011-09-30T05:42:42.157+10:00One horizontal and one vertical monitor for Ubuntu 11.04I recently had problems configuring Ubuntu with dual screens using an NVidia card - the first screen is horizontal and the second screen is vertical - they are both Dell U2711 monitors. The idea is to be able to have Firebug running on one screen without obscuring the web page.<br />
<br />
This should be a simple thing, but from what I can tell NVidia's TwinView driver doesn't support different monitor rotations (under Linux). With X Windows it should still be easy using dual X screens: it should just be a matter of going to NVidia X Server Settings, selecting "Separate X Screen" and then selecting "Enable Xinerama". Unfortunately for me, this caused general weirdness where the first screen was mostly black and the second screen was displayed horizontally.<br />
<br />
The way I fixed the problem was to disable Compiz. The <a href="http://askubuntu.com/questions/32447/how-do-i-disable-compiz-in-the-ubuntu-classic-session">easiest way I found to disable Compiz</a> was to log in using the "Ubuntu Classic (No effects)" session.<br />
<br />
Then it was just a matter of enabling separate X screens, Xinerama and rotation (RandRRotation). Here are the bits of my xorg.conf that rotate my second monitor to the left:<br />
<br />
<pre>Section "Monitor"
# HorizSync source: edid, VertRefresh source: edid
Identifier "Monitor1"
VendorName "Unknown"
ModelName "DELL U2711"
HorizSync 29.0 - 113.0
VertRefresh 49.0 - 86.0
Option "RandRRotation" "on"
Option "DPMS"
EndSection</pre>
<pre>Section "Screen"
Identifier "Screen1"
Device "Device1"
Monitor "Monitor1"
DefaultDepth 24
Option "TwinView" "0"
Option "metamodes" "DFP-2: nvidia-auto-select +0+0"
Option "Rotate" "left"
SubSection "Display"
Depth 24
EndSubSection
EndSection</pre>
Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-3322141.post-26105564117682368602011-08-08T14:20:00.007+10:002011-09-26T20:10:27.732+10:00When Kworkers Don'tI recently had the problem where kworker threads were taking up 100% CPU on my Ubuntu box.<br /><br />Various threads seem to suggest the problem lies with <a href="http://en.wikipedia.org/wiki/Advanced_Programmable_Interrupt_Controller">interrupts around PCI</a> or <a href="http://en.wikipedia.org/wiki/Advanced_Configuration_and_Power_Interface">power saving features</a>. The thread with the answer that worked for me was "<a href="http://ubuntuforums.org/showthread.php?t=1805678&page=2">HELP !!! Zombie attack ... (kworker)</a>".<br /><br />To turn both of them off use:<code><br />noapic acpi=off</code><br /><br />I found I only needed to turn off acpi though:<code><br />acpi=off</code><br /><br />Put that in your grub configuration (add it to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then run sudo update-grub) and restart.<br /><br />I hate not knowing why though.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-3322141.post-7104505034568636392011-05-08T09:14:00.006+10:002011-09-20T09:48:58.105+10:00End of JRDFMany things have changed since I started JRDF in 2003. It feels like JRDF has come to a natural conclusion.<br /><br />Some of them are things I've failed to do very well: get contributors, implement different file format parsers, find enough time to refactor existing bits, etc. I'm also not that interested in Java anymore (as was becoming increasingly obvious - it had Scala in there at one point and still has some Groovy DSL code).<br /><br />The most recent change I've seen is that JSON has achieved some of what RDF was trying to do and I see it more and more in the way people use it to expose their data in a RESTful way.
The tooling is less onerous and the ease of use is higher, even if what you get is much less.<br /><br />There are also external factors: <a href="http://www.w3.org/2010/02/rdfa/sources/rdf-api/">W3C's official RDF API</a> (for Java and JavaScript) is largely the same thing but with official backing.<br /><br />I've enjoyed developing it and meeting and talking to other people in other groups (especially Jena and Sesame). And of course, none of this would've happened if it wasn't for a lot of other people: Paul Gearon, Simon Raboczi, David Wood, David Makepeace, Tom Adams, Yuan Fang-Li, Robert Turner, Brad Clow, Guido Governatori, Jane Hunter, Imran Khan and Abdul Alabri and the other guys and girls at Tucana/Plugged In Software/UQ.<br /><br /><br /><br />Unknownnoreply@blogger.com0