Sunday, November 17, 2013

Make "Enter" in Twitter Typeahead Select the First Item

This is just a quick post, and it may not stay applicable for long, but it fixed the problem I had: I wanted pressing Enter to select the first suggestion even if it hadn't been highlighted with the mouse or cursor keys.

$('input.typeahead').keypress(function (e) {
  // 13 is the key code for Enter
  if (e.which == 13) {
    // Reach into typeahead's internals to get the first suggestion's datum
    var selectedValue = $('input.typeahead').data().ttView.dropdownView.getFirstSuggestion().datum.id;
    // Copy the selected id into the form's input and submit
    $("#input_id").val(selectedValue);
    $('form').submit();
    return true;
  }
});

I appended the same information to this GitHub issue.

Friday, August 23, 2013

Grace Hopper on Programmers and Programming

I've started to read "Show Stopper!" and its first chapter has an excellent passage about Grace Hopper, who created the first compiler and, in doing so, essentially created the job that modern programmers perform:
"Hopper was convinced that overcoming the difficulties posed by proliferating computer languages would rank among the greatest technical challenges of the future. "To me programming is more than an important practical art," she said in a lecture in 1961. "It is also a gigantic undertaking in the foundations of knowledge." Ironically, she fretted that the greatest barrier to progress might come from programmers themselves. Like converts to a new religion, they often displayed a destructive closed-mindedness bordering on zealotry. "Programmers are a very curious group," she observed. 
They arose very quickly, became a profession very rapidly, and were all too soon infected with a certain amount of resistance to change. The very programmers whom I have heard almost castigate a customer because he would not change his system of doing business are the same people who at times walk into my office and say, "But we have always done it this way." It is for this reason that I now have a counterclockwise clock hanging in my office."
I would love to know what the name of the lecture was and if there were any transcripts or copies of it around.

Monday, July 15, 2013

Copying between two uploaders in CarrierWave

Copying between two cloud providers using CarrierWave and Fog is a bit tricky.  Copying from one provider to a temporary file and then storing it in the other works, but the file name is not preserved.  If you wrap the temporary file in a SanitizedFile, CarrierWave will update the content without changing the name of the file.

The following code preserves the file name between providers (where "obj" is the model, "src" is one uploader and "dest" is the other):

require 'open-uri'  # needed so open() can read from the remote URL

def copy_between_clouds(obj, src, dest)
  tmp_path = "/tmp/tmp"
  begin
    filename = src.file.url
    # Download the source file into a local temporary file
    File.open(tmp_path, "wb") do |file|
      file << open(filename).read
    end
    # Wrapping the file in a SanitizedFile preserves its name when stored
    sf = CarrierWave::SanitizedFile.new(File.new(tmp_path))
    dest.file.store(sf)
  ensure
    File.delete(tmp_path)
  end
end
To use it:
copy_between_clouds(o, o.old_jpg, o.new_jpg)
You might need to change "src.file.url" to "src.file.public_url" for some cloud providers.

Friday, April 26, 2013

Elliott, Dina and Steve

I was reading "'Memo' Functions and Machine Learning" again.  It's an interesting article - it appeared in Nature, just before an article about mammalian reproduction - and it uses balancing a pole on a trolley as an example of artificial intelligence.

In the paper, the trolley is controlled in real time by two computers: a PDP-7 and an Elliott 4100.  I hadn't heard of the 4100 before, but Elliott comes from the start of the British computing industry, along with other names you may never have heard of like LEO and English Electric Computers. You can read more about them in "Early Computer Production at Elliotts" and "Moving Targets - Elliott Automation and the Dawn of the Computer Age in Britain 1947-1976" (review of the book).

One of the pictures in the Elliott computer archives has the caption, "Switching on the Elliott 405 at Norwich City Council in 1957. The woman to the right is Dina Vaughan (later Dina St Johnston), who did the initial programming for the Norwich system." In 1959, she became the first person to start a UK software house - that is, a company that only wrote software, not software bundled with hardware.  The first such company anywhere, according to Wikipedia, was Computer Usage Company in 1955.

The best resource on her I could find was an article in "The Computer Journal" called "An Appreciation of Dina St Johnston (1930–2007)".  It describes how she was writing software in the mid-50s, making her a contemporary of people like Michie, Turing, von Neumann and Gödel, and it describes what programming was like then:
"She wrote with a Parker 51 fountain pen with permanent black ink and if there ever was a mistake it had to be corrected with a razor blade. Whereas the rest of us tested programs to find the faults, she tested them to demonstrate that they worked."
One of the company's first commercial jobs was a control system for the first industrial nuclear power plant.  Her company, Vaughan Programming Services, was visited on the 10th anniversary of the British software industry by "Electronics Weekly":
"The staff in a company run by a woman might be expected to contain a high proportion of women, and this expectation is fulfilled", runs the EW report, "but, unexpectedly, a low proportion of the professionals employed have degrees, and there is no great emphasis on strong mathematical background in the mix of skills used."
The industry norms don't seem to have changed very much.  More details can be found on Google Books by searching for "Dina Vaughan" (or St Johnston).

In "Recoding Gender", Dina St Johnston is mentioned along with another female programming pioneer, Dame Stephanie Shirley.  A refugee of World War II, she entered the software industry as a "late pioneer".  She became interested in programming and got into the computer room by sweeping up chads, "I could not believe that I could be payed so much for something I enjoyed so much...early software was fascinating...it was so engrossing."  In 1962 she started "Freelance Programmers", the second software company founded by a woman in the UK.   Her view of the computing industry seems to be one that offered a way to address social and economic problems, "a crusade", a flexible workplace with policies designed to support women with dependents.  Originally designed to help women with children to continue to work, its charter gradually became more broad to include supporting women's careers, then for women with any dependent and in 1975 was expanded, by law, to include men.  The final mission became "people with dependents who couldn't work in the conventional environment".  She says in her biography, the company had always employed some men and at the time of the passing of the equal opportunities law three of the 300 programmers and a third of the 40 systems analysts were male.

A Guardian article, written in 1964, quoted in "Dinosaur and Co", about Shirley and the early IT industry:
"The main qualification is personality...Much of the work is tedious, requiring great attention to detail, and this is where women usually score...Mrs Steve Shirley...has found in computer programming an outlet for her artistic talents in the working out of logical patterns.
Now retired with a young baby, she has found that computer programming, since it needs only a desk, a head and paper and pencil, is a job that can be done from home between feeding the baby and washing the nappies.  She is hoping to interest other retired programmers in joining her work on a freelance basis."
The difficulties in starting a software company in the 1950s and 1960s seem immense.  There was the idea that you couldn't sell software, that it didn't have any value as a product or a service by itself, as customers expected it to be free with the hardware.  Then there is the inequality and sexism.  She called herself "Steve" as no one responded to her business letters when she used "Stephanie".  Banks also required written permission from a man so that a woman could open a bank account.  Furthermore, almost all companies and the public service required or expected women to leave their job when they married or had their first child, so you "retired with a young baby".  One of the few ways women could continue to work was to start their own company.

She mentions her title was for "services to the industry" and as any good programmer does, she defines Dames: "...recursively by saying, a Knight is a male Dame".  She recently released a biography called "Let IT Go" which includes many personal struggles as well as parts that are a more practical, British version of "Lean In".

You can listen to her talk in "The Life Story of a Pioneer: From Hi-tech to Philanthropy" (the subject of IT and running a software company begins about 12 minutes in; the second half of the talk is dedicated to her philanthropy, mostly around autism).  There's also an earlier recorded video of that talk and others on her University of Oxford page.

The early British IT industry wasn't only about commercialising military projects or solving hardware and software problems; it was also a way of effecting social change - allowing more people to work more flexibly.

Thursday, January 31, 2013

Removing Large Files from Git

When I've used git, I've used it pretty much like CVS, SVN and any other version control system I've used before - I've checked in binary files.  Whether that's DLLs or jars or gems, I've checked them in.  Pretty much everywhere I've worked people have said this is a problem, and I've tended to argue it solves a lot of problems - one of the main ones being that when the repositories and the management software around them fail, I still have a working system from a checkout/clone/etc.

The price of this is that sometimes you need to clean up old binary files.  Git makes this complicated, but once you've found a couple of tools it's relatively straightforward.

Stack Overflow has a Perl and a Ruby script that wrap a few git commands to list all files in a repository above a certain size, in "Find files in git repo over x megabytes, that don't exist in HEAD".  The main gist of it is (in Ruby):

IO.popen("git rev-list #{head}", 'r') do |rev_list|
  rev_list.each_line do |commit|
    commit.chomp!
    for object in `git ls-tree -zrl #{commit}`.split("\0")
      bits, type, sha, size, path = object.split(/\s+/, 5)
      size = size.to_i
      big_files[sha] = [path, size, commit] if size >= treshold
    end
  end
end

big_files.each do |sha, (path, size, commit)|
  where = `git show -s #{commit} --format='%h: %cr'`.chomp
  puts "%4.1fM\t%s\t(%s)" % [size.to_f / Megabyte, path, where]
end

Then to remove the old files from the repository:

git filter-branch --force --index-filter 'git rm --cached --ignore-unmatch [full file path]' -- --all
git push --force


Then to clean up the now-unused space in the git repository:


rm -rf .git/refs/original/
rm -rf .git/logs/
git reflog expire --expire=now --all
git gc --aggressive --prune=now

Saturday, December 29, 2012

Transparent Salaries

The stereotype is that developers are notoriously bad at human interactions.  I'd suggest that developers are notoriously bad at interactions that they see as fake - things like small talk and negotiations.  In a developer's mind, or to be honest mine at least, the ability to get paid well or to pay less than retail for a product shouldn't be based on your ability to pretend you're friendly with someone you're not; it should be based on some sort of system.  Why not create a self-consistent system rather than rely on interacting with people?

With this in mind I decided to try to create a transparent system at work to handle salaries.  The problems I see with the way traditional salary is handled, especially the lack of transparency, include:
  • It combines performance with remuneration,
  • Programmers are notoriously bad at valuing themselves, communicating it with others and ensuring that they are adequately paid during a job interview or while employed,
  • It prevents an objective assessment of what your roles and responsibilities are in the organisation,
  • It lacks an acknowledgement of what your skills are worth in the job market,
  • It creates two groups: management and developers.  This allows a combative attitude to be created and is used to justify why developers shouldn't trust business people and management,
  • People tend to find out anyway.
Some of these points I'll admit are difficult to solve whether it's a transparent system or not.  However, the last two points, which I think are especially toxic, can be solved with a transparent system.  In a closed salary system, people are encouraged to secretly find out what other people are worth and to provoke comparisons between each other.  The time periods are often long and the information often incorrect.  If a system is transparent you can solve that problem by making the information accurate and positive.

People tend to ask, "Why does Mary earn more than me? I think I'm a better programmer/analyst/whatever than she is. Was it just because Mary started when the company had more money?"

Joel Spolsky is probably one of the key influences I've come across on having transparent salaries.  For example, in "Why I Never Let Employees Negotiate a Raise":
"...we knew that we wanted to create a pay scale that was objective and transparent. As I researched different systems, I found that a lot of employers tried to strike a balance between having a formulaic salary scale and one that was looser by setting a series of salary "ranges" for employees at every level of the organization. But this felt unfair to me. I wanted Fog Creek to have a salary scale that was as objective as possible. A manager would have absolutely no leeway when it came to setting a salary. And there would be only one salary per level."
The Fog Creek Ladder is based on the idea of achieving a certain level of capability.  The problem I had with the Fog Creek solution was that it seemed to suggest, especially at the skills level, that a programmer starts off needing help and working with others and then slowly achieves the ability to work alone.  Where I work we wanted to do the opposite - as you get better at programming you get better at being able to explain, to listen and to work with others.  I think this is especially important if you want to work in an environment with living architecture.

So the input is simply what you do - this should be objective and easy to see (again we're assuming a more transparent work environment where work is checked in or on a wiki - if it's not shared you haven't done it).  It's assumed that you perform well - if you don't, you're not doing your job.  You can argue your role and performance separately from salary, as these are assumed correct coming in.

The other input to this is local salary.  As Joel has said, if salaries rise quickly or fall sharply then the employees' salary should too.

With this in mind, there were three factors we used to calculate salary:
  1. Experience (4 bands - 0-2 rating),
  2. Scope of Responsibility (0-5 rating) and
  3. Skill Set (0-5 rating).
Experience has the least weight and is geared heavily towards moving from graduate to intermediate (three bands over 5 years) and maxing out after 15 years.  

The scope of your responsibilities starts with the ability to make small technical decisions, moves on to decisions about which libraries are used, and finally to cross-product decisions.  This doesn't mean that we have architect roles though; it means that if you are making these decisions that's what you get paid for, not the other way around.

Skill set is pretty much technical ability, with an emphasis on being able to break work up into different levels of tasks: creating tasks from features, features from iterations, iterations from epics, and charting a course across product cycles and customers.

The next part is how we find an objective measure of salaries to match the levels we've created.  I found a Queensland salary guide:
Software                             Junior     Intermediate   Senior
Analyst Programmer - J2EE            $60,000    $90,000        $110,000
Analyst Programmer - MS.Net          $60,000    $90,000        $120,000
Analyst Programmer - Other           $60,000    $85,000        $110,000
Applications / Solutions Architect   $100,000   $140,000       $180,000
Team Leader - J2EE                   $90,000    $108,000       $117,000
Team Leader - MS.Net                 $85,500    $100,000       $122,000
Team Leader - Other                  $81,000    $90,000        $99,000

The main problem with these guides is the lack of acknowledgement of cross-functional abilities.  They tend to break employees out by traditional titles: system administrator, database administrator, support roles, architect and programmer.  These are all roles that I expect everyone to be able to do.  We picked the highest programmer category (MS.Net), but you could argue that it should be higher based on the ability to handle iterations, customers and architecture (so maybe between $60,000 and $180,000).

Our version of Joel's ladder:

                 Average of Scope and Skills
Experience       0      1      2      3      4      5
Graduate         0      1      2      3      4      5
Junior           1      2      3      4      5      6
Intermediate     1.5    2.5    3.5    4.5    5.5    6.5
Senior           2      3      4      5      6      7

The maximum score is 7, with the base value coming from your experience rating (0-2).

So our "developer" salary was:
Graduate   Junior    Intermediate   Senior
$40,000    $60,000   $90,000        $120,000

Each point (from the previous table) is weighted at roughly $11,400 (the $80,000 difference divided by 7 points), which means that if the score comes out to a non-whole number you can interpolate between grades - a 6.3 would be $111,820 ($40,000 + 6.3 * $11,400).  What might be a bit confusing is that $40,000 is really the minimum and $120,000 is the maximum.
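
To make the arithmetic concrete, here is a minimal sketch of the calculation in Ruby (the band ratings and the rough $11,400 point weighting come from the tables above; the constant and method names are just made up for illustration):

# Hypothetical sketch of the salary calculation described above.
# Experience bands map to a base rating of 0-2; scope and skill are each 0-5.
EXPERIENCE_RATING = {
  graduate: 0, junior: 1, intermediate: 1.5, senior: 2
}

BASE_SALARY  = 40_000   # the "graduate, 0 points" starting salary
POINT_WEIGHT = 11_400   # roughly ($120,000 - $40,000) / 7 points

def salary(experience, scope, skill)
  points = EXPERIENCE_RATING.fetch(experience) + (scope + skill) / 2.0
  BASE_SALARY + points * POINT_WEIGHT
end

salary(:senior, 4, 5)        # 2 + 4.5 = 6.5 points, about $114,100
salary(:intermediate, 3, 3)  # 1.5 + 3 = 4.5 points, about $91,300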

Overall I think this is a better system than negotiating up front and then at regular intervals (usually before or after a project).  That approach reeks of an up-front-heavy process, and I wonder if it really needs to be.  Salary seems to be one of the last things that isn't considered a continuous process, unlike most things in software development now.  Making salary transparent turns it into another feedback process.  Could you turn salary into an iterative process?  Could you iterate on it more quickly than yearly - possibly monthly or weekly?

While the inputs are supposed to be objective, you can't say this process is value free.  We've made choices about what we think is more important.  As with many of these processes, getting agreement may be harder than setting up the initial process.  It might be as hard as trying to retroactively apply a coding standard.

The only negative I can think of is if you're a person (especially in business) who believes that everything is a negotiation and that you should never leave anything on the table.  This is where I think the developer-versus-business idea comes in.  I think it's an overall cultural negative - especially if these are the same people who are creating customer contracts and the like.  As a developer you want to work with your customers and business people.

Update: "Psst...This Is What Your Co-Worker Is Paid":
Little privacy remains in most offices, and as work becomes more collaborative, a move toward greater openness may be inevitable, even for larger firms...But open management can be expensive and time consuming: If any worker's pay is out of line with his or her peers, the firm should be ready to even things up or explain why it's so...And because workers can see information normally kept under wraps, they may weigh in on decisions, which can slow things down, company executives say. 
Once employees have access to more information, however, they can feel more motivated.

Tuesday, October 02, 2012

Imagination Amplifier

This is a reproduction of an article that appeared in COMPUTE!'s Gazette, Issue 59.  I'm reproducing it here because I think it's particularly good and on archive.org it appears in formats where it's unlikely to be found again. There is also an alternative link to his November COMPUTE! article.

Worlds Of Wonder - WOW!

In this month's mailbag I received a letter from Art Oswald of Goshen, Indiana. Art was responding to my article in the November COMPUTE! magazine about computers of the future. He wrote: "In the future, the phrase 'I wonder' will become obsolete. I won't have to wonder what would happen if, or wonder what something was like, or wonder how something might be. I would just ask my computer, and it would simulate by means of holographic projection anything my imagination could come up with."

Now, I ask you, Art, is this something to look forward to or something to dread?

I have a new science-fiction book coming out which deals with this subject — the effect of computers (and electronic media, in general) on the human imagination. The book is Robot Odyssey I: Escape from Robotropolis (Tor Books, April 1988). Listen to two teenage boys carrying on a conversation in the year 2014:
We think plenty using computers, but we don't imagine. We don't have to imagine what the fourth dimension is, or what will happen if we combine two chemicals, or what the dark side of the moon looks like. The computer is there a step ahead of our imagination with its fantastic graphics, cartoons, and music. We no longer imagine because the computer can do our imagining for us. 
"So why imagine?" Les said. "My pop says most people's imaginations are vague and fuzzy anyway. If the computer imagines stuff for them, it'll probably be a big improvement.
Les is right. If the computer "imagines" something, it is usually based on a database of facts, the vision of an artist, or a scientific model created by experts. How could our puny imaginations compete with images that are this inspired, detailed, and exact?

Frontiers Of Knowledge 

Science-fiction writers think a lot about new worlds of wonder. It is the human desire to "go boldly where no man has gone before" that is among our more noble impulses. It may even be the "engine" that drives us to innovate, invent, and take risks. Without this engine, we might sink into a kind of emotional and intellectual swamp. Life could become extremely boring. Every time we contemplated a decision, we would first ask our computer, "What if?" and see what the consequences might be. Knowing too much might even paralyze us and cool our risk-taking ardor.

Imagination Amplifiers

Art writes that the phrase I wonder may be rendered obsolete by computers, but I'm not certain that he's right. Instead, I think that we could use computers to stimulate our imagination and make us wonder about things even more.

Where does our imagination come from? I picture the imagination as a LegoTM set of memory blocks stuffed into the toy chest of our mind. When we imagine something, we are quickly and intuitively building a tiny picture inside our heads out of those blocks. The blocks are made up of images, tastes, smells, touches, emotions, and so on — all sorts of things that we've experienced and then tucked away in a corner of our minds. The quality of what we imagine depends on three things: how often we imagine, the quantity and diversity of blocks that we have to choose from, and our ability to combine the blocks in original — and piercingly true — ways.

Most of us have "pop" imaginations created from images supplied to us by pop culture. We read popular books, see popular movies, watch the same sitcoms and commercials, and read the same news stories in our newspapers. It's no wonder that much of what we imagine is made up of prefab structures derived, second hand, from society's small group of master "imagineers." Electronic media has made it possible for these imagineers to distribute their imaginations in irresistible packages. If you have any doubt, ask an elementary school teacher. Her students come to school singing jingles from commercials and write "original" compositions which really are thinly disguised copies of toy ads, movies, and Saturday morning cartoons.

Where does the computer fit into this picture? It could be our biggest defense against the imagination monopoly which the dispensers of pop culture now have. If we can tell the computer "I wonder" or ask it "What if?" it will work with us to build compelling images of what we imagine. If the process is interactive, and we can imagine in rough drafts, then we can polish, ornament, and rework our images as easily as a child working with sand on a beach. Then maybe the images inside our heads will be from imagination experiments that we do with our computers and not stale, leftover images pulled from the refrigerator of pop culture.

Fred D'Ignazio, Contributing Editor

Thursday, September 20, 2012

Not Much Better

I've been reading "A Question of Truth", which is primarily about homosexuality in the Catholic church and references to it in the Bible. It has a lengthy, careful but very readable introduction that explains many currently held views, the difference between acts and intents, and the damage done to people; it carefully describes the different aspects of sexuality, separates the issues well, and does a reasonably good job of describing the difference between intensional and extensional usage.

A lot of this is Bible study 101 - the modern ideas like love, homosexuality, marriage, property, slavery, and so on have moved or did not exist when the Bible was written, so what people often read into it is not the original intent - not that I would say that the original intent is much better - and that's the real problem.

The book effectively reasons around all the major passages that people use to treat gay people badly. However, in the course of the reasoning, it just seems to move away from treating homosexuality as sinful to refining women's historical position in society.

For example, the infamous Leviticus passage about men not lying with men is reasoned to mean not that the act is wrong but that a man shouldn't treat a man like a woman. Another is the story of Lot and our friends the Sodomites, which again involves offering up your daughters for the sake of hospitality; the suggestion is that Sodom was destroyed because its people humiliated the visitors, not because of anything to do with gay love.

There's a sentence or two along the lines that no modern Christian would treat women in this way (or have slaves?), which I thought rather undermines the whole point of the exercise.

Friday, May 18, 2012

Constructivism - Why You Should Code

I think this article on why you shouldn't code is wrong. It's wrong in the way that I was wrong in high school when I thought I would never need to know German, art or biology. It's wrong in the way I was wrong about never needing to know set theory or relational theory or category theory. But it's also wrong in ways I will never really know, as in "Computer As Condom":
Debbie hated math and resisted everything to do with it. Tested at the bottom of the scale. She learned to program the computer because this let her play with words and poetry, which she loved. Once she could write programs she found a way to tie fractions into words and poetry. Writing witty programs about fractions led her to allow herself to think about these previously horrible things. And to her surprise as much as anyone's her score on a fractions test jumped into the upper part of the scale.
What you do as a job programming in C#, Java, JavaScript or whatever has very little to do with the way people use coding to learn about learning. That's the most disappointing thing about the article. It is the terrible idea that learning how to code lessens the world if you do it wrong. Learn to code backwards in time in Latin in Perl but don't listen to anyone who says you shouldn't code.

Monday, May 14, 2012

Lectorial

I just finished a study group on Learn You a Haskell for Great Good.  It was a great experience for many reasons, but I think the way each session was structured as a combination of lecture and tutorial deserves particular attention.

The weekly structure was fairly straightforward: a chapter leader covers a chapter the week before the rest of the group, then writes a summary and some programming questions.  The weekly sessions took about an hour and a half.  They consisted of the chapter leader going through their summary, with the group interjecting with questions and answers (if the chapter leader didn't know), or some furious Googling to find a good reference or answer that someone half remembered.  The programming questions and answers would usually go around the table: each person would answer a question and the others would then comment on it or show their own answer if it was particularly different (or shorter or whatever).  The time was split roughly 60/40 between lecture and programming/tutorial.

Compared to university courses, where you often had two hours of lectures and then one or two hours of tutorials spread out over a week, this arrangement seemed very time efficient.  The other advantage was getting the students to run the study group.  The chapter leader had to spend a lot more time making sure they understood the chapter in order to answer any questions that came up during the review and to set the programming questions.  For me, setting the questions and making sure you had answers (and by the end of it, tests to help people along) was probably the best part of the learning experience.  There was no real hiding if you hadn't done the answers either - partially because it was such a small group but also because of the high level of participation.

It'd be interesting if there were university courses where you were graded not just on an examination and assignments but also on the questions you set and on whether you were able to run a small group of people through a class.  It would also make tutorials, which are often skipped by students, more relevant.

It seems "lectorial" also means, "large tutorial in a lecture hall to give context around information given in lectures".  They also mention small group activities and class lead presentations so there is some overlap.

Thursday, January 26, 2012

One Platform

A couple of things have struck me about the iPad and iBooks Author.  If you want some background, John Gruber has a good summary.  It may well come down to whether being focused on one thing is wrong.

Firstly, Steve Jobs is quoted in his biography saying he'd give away textbooks.  That was a pretty big bargaining chip when Apple was talking to the textbook publishers: go with us or we'll give your product away for free.  How does this differ from Bill Gates saying they'd cut off Netscape's oxygen supply?

The other thing it reminds me of is Bill Gates' developer virtuous cycle.  This is where developers write applications for a particular platform, users pick the platform with the most applications, and that in turn feeds back into developers supporting that platform.  In the past, developers have had to make a single choice of which platform to support in order to succeed.  It continues to happen with Android and iPhone.  Jeff Raikes gives a good example from the early days of Microsoft in "The Principle of Agility", where he says:
I suspect many of you or in the audience might if I ask you the question of, "What was Microsoft's first spreadsheet?" You might think Excel was our spreadsheet. But in fact, we had a product called Multiplan...Our strategy was to be able to have our application products run on all of those computing platforms because at that time there were literally hundreds of different personal computers.
And on January 20th, 1983 I realized, I think Bill Gates also realized we had the wrong strategy. Any guesses to what happened on January 20th, 1983? Lotus, it was the shipment of Lotus 1-2-3. How many personal computers did Lotus run on in January of 1983? One, and exactly one. And it was a big winner. So what we learned was when it came to customer behavior. It wasn't whether you had a product that run on all of those computing platforms. What really mattered to the customer was, did you have the best application product on the computer that they own. And Lotus 1-2-3 was the best spreadsheet. In fact, it was the beginning of a "formula for success in applications". That I defined around that time called, "To win big, you have to make the right bet on the winning platform."
So what's the principle? The principle is agility. If you're going to be successful as an entrepreneur what you have to do is you have to learn. You have to respond. You have to learn some more. You have to respond some more. And that kind of agility is very important. If we had stayed on our old strategy, we would not be in the applications business today. In fact, one of the great ironies of that whole episode is that in the late '80s or early '90s our competitors, WordPerfect, Lotus. What they really should have been doing was betting on Windows. But instead they were betting on and WordPerfect was the best example. Betting on, putting WordPerfect on the mainframe, on minicomputers. In fact, they went to the lowest common denominator software strategy which we switched out of in the 1983 timeframe. So, for my key principle is, make sure that you learn and respond. Show that kind of agility. 
This is echoed in one of the recent exchanges (about an hour into MacBreak Weekly 283) where Alex Lindsay talks about how important it is to him to make education interesting and how he's not going to wait for standards - he just wants to produce the best.  Leo Laporte responds by saying how important it is for an open standard to prevail, to prevent every child in America from having to own an iPad in order to be educated or informed.

You have to wonder if developers have reached a level of sophistication that allows them to use a cross platform solution or whether that will ever happen.  I think that it's inevitable that a more open platform will succeed but I'm not sure whether multiple platforms can succeed - we shall see.

If you want to hear more there are many interesting conversations around, including Hypercritical 51, MacBreak Weekly 283 and This Week in Tech 337.

Thursday, January 12, 2012

Pretending We All Don't Know

Some amazing writing and performance by Mike Daisey (mp3):
He just walked up to the Foxconn plant to see if anyone wanted to talk to him:

I wouldn't talk to me...she runs right over to the very first worker...and in short order we cannot keep up...the line just gets longer and longer...everyone wants to talk...it's like they were coming to work everyday thinking, "You know it'd be great?  It'd be so great if somebody who uses all this crap we make, everyday all day long, it'd be so great, if one of those people came and asked us what was going on because we would have stories for them...

I haven't gotten all the way through but he has a bit about talking to a girl that cleaned the glass on the assembly line:

You'd think someone would notice this, you know?  I'm telling you that I don't know Mandarin, I don't speak Cantonese...I don't know fuck all about Chinese culture but I do know that in my first two hours on my first day at that gate I met workers who were 14 years old, 13 years old, 12.  Do you really think that Apple doesn't know?  In a company obsessed with the details.  With the aluminium being milled just so, with the glass being fitted perfectly into the case.  Do you really think it's credible that they don't know?  Or are they just doing what we're all just doing, do they just see what they want to see?


It seems absolutely credible that they do know.


Update, 17th of March: retracting Mr Daisey (mp3) - it appears his story was more fiction than not.  I took it more as performance than journalistic reporting, but many claims aren't just errors, they were simply made up.  He did out-and-out lie when asked about child labour: "Well I don't know if it's a big problem. I just know that I saw it."  Which is a shame, because verified reports of these conditions contain similar claims.

Wednesday, January 04, 2012

Jesus Says Share Files

A famous story: Jesus takes some fish and loaves (accounts differ - although maybe he did it more than once, setting up the "Jesus's Food Multiplier" stall every second and fourth Saturday of the month) and feeds some people (again accounts differ, and they don't count women and children as people - let's just skirt around that entire issue, shall we).

Everyone was impressed - even the disciples who were fishermen with deep ties to the community.  They didn't say, "Hey, Jesus, you've just destroyed our business model; you can't go around feeding thousands of people per fish.  One person, one fish - that's the way it has always been and that's the way it should always be."

Tuesday, January 03, 2012

Partitioning Graphs in Hadoop

A recent article at LinkedIn called "Recap: Improving Hadoop Performance by (up to) 1000x" had a section called "Drill Bit #2: graph processing" mentioning the problem of partitioning the triples of RDF graphs amongst different nodes.

According to "Scalable SPARQL Querying of Large RDF Graphs" they use MapReduce jobs to create indexes where triples such as s,p,o and o,p',o' are on the same compute node.  The idea of using MapReduce to create better indexing is not a new one - but it's good to see the same approach being used to process RDF rather than actually using MapReduce jobs to do the querying.  It's similar to what I did with RDF molecules and creating a level of granularity between graphs and nodes as well as things like Nutch.