Sunday, April 17, 2016

Slavery is the New Bacon

“We can barely decide whether or not bacon will cause health problems year over year, let alone the more complicated issues like politics and race.”
Matthew T De Goes (screen capture).
“Some people just don’t want a bad person invited to a tech conference, even if their talk was picked by a blind committee, they are peaceful, they reject any type of violence, and they don’t pose a safety threat.”
Personal Thoughts on the LambdaConf Controversy.

The committee was blind to any favour or discrimination. How can anyone object to our objectivity? The blindfold came off for a bit, though, and the LambdaConf committee found out some stuff:
“Are these views racist and sexist? Absolutely, since they don’t admit the possibility that, for example, an asian female with no background in computer science might do a better job at “governance” than any white male software engineer. Are these views endorsed by LambdaConf or held by any staff members? Hell, no!”
LambdaConf-Yarvin Controversy: Call for Feedback.

Having a blind submission process, getting people to sign up to a Code of Conduct, and running the conference purely on those principles is a good ideal to aim at. It’s possible that this could’ve worked for LambdaConf.

In contrast, though, the LambdaConf organisers went looking into the background of the speaker, emailed other speakers, held a vote, and wrote a few blog posts. It shows a lack of confidence in the process while undermining it at the same time. Maybe a more fully featured open review process would’ve been better.

Blind reviews do nothing for inclusion or diversity and reinforce existing discrimination: see “Does double-blind review benefit female authors?” and “Understanding current causes of women’s underrepresentation in science”. It’s like waving the checkered flag at the end of a Formula 1 race and wondering why only rich people are finishing.

The contradiction of LambdaConf is holding a conference that touts its diversity while inviting someone who is against including certain groups of people. Is Yarvin really the best guy for the job — is he even trying? No. He just doubled down and justified his views.

In that post, he makes it clear that Yarvin and Moldbug are the same person while saying the exact opposite. He’s saying that if you can’t tell the difference between the two, especially after thousands of words, it is you that has the problem, not him. Don’t be confused, he’s blaming you — he’s not coming peacefully.

He says he’s not racist but Moldbug might be (and another). He talks about Carlyle, fascism (“no such thing as too much truth, too much justice, or too much order”), people as property (“we agree that he can sell himself into slavery”), and race as a determinant of intelligence (“current results in human biodiversity”). It’s a regressive set of ideas — even in its own time:
“The alternative to markets was not socialism. There were socialist experiments, but there were no socialist economies. The alternative to market organization was slavery.”
150 Years and Still Dismal!

The purpose of a conference is networking and learning. It’s a place where people are going to teach children, single mothers, parents, and anyone else who comes along. This will make a difference.

The situation is that attendees will be able to see right through his poor disguise. It makes him a terrible teacher and the conference a poorer place at which to learn. The existence of such a speaker, publicised in such a way, reduces attendees’ ability to perform — hurting the people you’re trying to help.

Bacon is not good for you and there is no slippery slope. You pick who comes to your conference depending on the size of the out-group you want to create. Racism and slavery are socially engineered injustice — they deny people’s humanity, and in that way they reduce us all.

Wednesday, August 05, 2015

Using Ruby to Inject you a Monoid

A monoid has an append operation (like plus) and an identity element (like 0), and you get a concat operation for free.

In Ruby it's something like:

[1,2,3].inject(0) {|a, x| a + x }
=> 6

Or just, [1,2,3].inject(:+)
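
For plus and zero to actually form a monoid, append has to be associative and the identity has to leave values unchanged. A quick check of those monoid laws in irb:

a, b, c = 1, 2, 3
((a + b) + c) == (a + (b + c))   #=> true, append is associative
(0 + a) == a && (a + 0) == a     #=> true, 0 is the identity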

In Haskell, you can even see it in the type signature of Monoid's mconcat:
mconcat :: Monoid a => [a] -> a

You can see the list on the left ([1,2,3]) and the result unpacked on the right (just 6).

What if you want to take it up one level of abstraction and fold a list of operations instead of a list of numbers?  You just use a different monoid called Endo.

To take it to this next level you need a more abstract append and identity.  

Append needs to combine two operations, which is function composition:
compose = ->(f, g) { ->(*args) { f.(g.(*args)) } }

And identity just returns what you give it:
id = -> (x) { x }

Which lets you then write:
[->(x){ x + 2 }, ->(x){ x * 7 }].inject(id) { |f, g| compose.(f, g) }.call(8)
=> 58

Or in Haskell:
Prelude> let a = foldr (.) id [(+2), (*7)]
Prelude> a 8
58


Tuesday, February 17, 2015

jQuery still not a Monad

I read jQuery is a Monad and thought: yeah, this is pretty cool, I finally understand Monads.

jQuery is not a Monad. A Monad can take any type, and it has a join operator that takes a doubly wrapped value and turns it into a singly wrapped one. This means that for jQuery to be a Monad it would have to work on any type: you would have to be able to give it a String, Int or DOM node and have it operate on them consistently. jQuery's .map can only deal with the one type. It does have $.map, but that would make the Array the Monad (or actually just a Functor), not jQuery.
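
To make join concrete, here's a minimal sketch in Ruby, where Array really does behave monadically - flatten(1) plays join and flat_map plays bind:

wrap = ->(x) { [x] }                          # unit: put a plain value in the container
join = ->(xss) { xss.flatten(1) }             # join: remove exactly one layer of wrapping

join.([[1, 2], [3]])                          #=> [1, 2, 3]
[1, 2, 3].flat_map { |x| [x, x * 10] }        #=> [1, 10, 2, 20, 3, 30]
[1, 2, 3].map { |x| [x, x * 10] }.flatten(1)  # bind is just map followed by join

These work for a list of anything - Strings, Ints, DOM nodes - which is exactly the uniformity jQuery's container lacks.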

Many of jQuery's methods are specific to DOM manipulation, parsing and the like, and are not related to Monads in any way - it's more like a combinator library such as HXT.

The idea that it is a Monad still continues with What jQuery can teach you about monads and Does jQuery expose a monadic interface?. One of the points that I think people ignore is that JavaScript has an implicit this, and it affects how you apply functions:
As is common with object-oriented language implementations, the this variable can be thought of as an implicitly-passed parameter, so we can then look through the API for a jQuery container looking for a method that takes one of these transformation callbacks and returns a new jQuery container.
This actually prevents you from easily (and certainly not clearly) writing Monads in JavaScript in the generic fashion that is required:
So, is jQuery or the Django ORM a monad? Strictly speaking, no. While the monad laws actually hold, they do so only in theory, but you can not actually use them in those languages as readily as you can in, say, Haskell. Methods get the object as the first (implicit, in JavaScript) argument, not the value(s) stored in the object. Methods are not first class objects independent from their classes. You can circumvent those restrictions by implementing some boiler code or, in Python, metaclasses that do some magic. What you get for doing that is a much easier time writing functions that work on all monadic classes, at the expense of making the whole concept more difficult to understand.
As Ed said: "jQuery is an amazing combinator library, but it isn't a functor, it isn't applicative, and it isn't a monad."

Monday, May 05, 2014

Recovering from ElasticSearch Recoveries

We recently had a problem with ElasticSearch's snapshots where a shard (a directory) was failing because it was missing the metadata file and data files.

This leads to a couple of criticisms of the snapshot directory format.  Primarily, it takes files with reasonable extensions, generally Lucene files, and creates files like "__1" and then records a mapping from "__1" to "_def.fdt".  For example:

  "name" : "es-trk_allindices_2014-01-01_0000est",
  "index-version" : 78683,
  "files" : [ {
    "name" : "__0",
    "physical_name" : "",
    "length" : 2012,
    "checksum" : "13m617n",
    "part_size" : 104857600
    }, {
    "name" : "__1",
    "physical_name" : "_def.fdt",
    "length" : 97744833,
    "checksum" : "239wze",
    "part_size" : 104857600

The files aren't even located together in the metadata file.  In Lucene, you have a group of files prefixed with, say, "_def" - like fdt, fdx, tip, tim, del, nvm, and nvd - in a single directory.  Losing the metadata file means losing not only the helpful filenames but also the groupings used by Lucene.

Luckily, ElasticSearch uses FDT files, which have just enough information - the unique index identifier and then the payload - to turn them into a CSV or other file so the data can be reimported into ElasticSearch.  If you have the same problem you will have to force shard allocation, or create an empty shard in a new cluster, delete the failed shard and copy the new shard in place of the failed one.
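
Forcing the allocation goes through the cluster reroute API.  A rough Ruby sketch, assuming an ElasticSearch 1.x cluster - the index name, shard number and node name are placeholders, and allow_primary will bring up an empty primary, so only use it if you're going to reimport the data anyway:

require 'net/http'
require 'json'

# Placeholders: change the index, shard number and node to match the failed shard.
command = {
  commands: [
    { allocate: { index: 'myindex', shard: 0, node: 'node-1', allow_primary: true } }
  ]
}

uri = URI('http://localhost:9200/_cluster/reroute')
response = Net::HTTP.post(uri, JSON.generate(command),
                          'Content-Type' => 'application/json')
puts response.body   # the cluster's new routing table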

The utility, es_fdr, reads FDT files and outputs them one field per line; it's available on the OtherLevels GitHub page.  I've also updated a related Lucene ticket.

Sunday, November 17, 2013

Make "Enter" in Twitter Typeahead Select the First Item

This is just a quick post, which may not be applicable for long, but it fixes the problem I had: making Enter select the first suggestion even if you hadn't picked it with the mouse or cursor keys.

$('input.typeahead').keypress(function (e) {
  if (e.which == 13) {  // 13 is the Enter key
    var selectedValue = $('input.typeahead').data().ttView.dropdownView.getFirstSuggestion();
    $("#input_id").val(selectedValue);
    $('form').submit();
    return true;
  }
});
I appended the same information to this GitHub issue.

Friday, August 23, 2013

Grace Hopper on Programmers and Programming

I've started to read "Show Stopper!" and it has an excellent part in the first chapter about Grace Hopper, who created the first compiler and in doing so basically created the job that modern programmers perform:
"Hopper was convinced that overcoming the difficulties posed by proliferating computer languages would rank among the greatest technical challenges of the future. "To me programming is more than an important practical art," she said in a lecture in 1961. "It is also a gigantic undertaking in the foundations of knowledge." Ironically, she fretted that the greatest barrier to progress might come from programmers themselves. Like converts to a new religion, they often displayed a destructive closed-mindedness bordering on zealotry. "Programmers are a very curious group," she observed. 
They arose very quickly, became a profession very rapidly, and were all too soon infected with a certain amount of resistance to change. The very programmers whom I have heard almost castigate a customer because he would not change his system of doing business are the same people who at times walk into my office and say, "But we have always done it this way." It is for this reason that I now have a counterclockwise clock hanging in my office."
I would love to know what the name of the lecture was and if there were any transcripts or copies of it around.

Monday, July 15, 2013

Copying between two uploaders in CarrierWave

To copy between two cloud providers using CarrierWave and Fog is a bit tricky.  Copying from one provider to a temporary file and then storing it in the other seems to work, but the problem is that the file name is not preserved.  If you wrap the temporary file in a SanitizedFile then CarrierWave will update the content without changing the name of the file.

The following code preserves the file name between providers (where "obj" is the model, "src" is the source uploader and "dest" is the destination):

require 'open-uri'

def copy_between_clouds(obj, src, dest)
  filename = src.file.url
  File.open("/tmp/tmp", "wb") do |file|
    file << open(filename).read            # download from the source provider
  end
  t = File.open("/tmp/tmp")
  sf = CarrierWave::SanitizedFile.new(t)   # wrapping preserves the stored file name
  dest.store!(sf)                          # upload to the destination provider
  obj.save!
end
To use it:
copy_between_clouds(o, o.old_jpg, o.new_jpg)
You might need to change "src.file.url" to "src.file.public_url" for some cloud providers.

Friday, April 26, 2013

Elliott, Dina and Steve

I was reading "'Memo' Functions and Machine Learning" again.  It's an interesting article, appearing in Nature, before an article about mammalian reproduction, and uses balancing a pole on a trolley as an example of artificial intelligence.

In the paper, the trolley is controlled in real-time by two computers: a PDP-7 and an Elliott 4100.  I hadn't heard of the 4100 before, but Elliott and others like it come from the start of the British computing industry - including names you may never have heard of, like LEO and English Electric Computers. You can read more about them in "Early Computer Production at Elliotts" and "Moving Targets - Elliott Automation and the Dawn of the Computer Age in Britain 1947-1976" (review of the book).

One of the pictures in the Elliott computer archives has the caption, "Switching on the Elliott 405 at Norwich City Council in 1957. The woman to the right is Dina Vaughan (later Dina St Johnston), who did the initial programming for the Norwich system." In 1959, she became the first person to start a UK software house - a company that only wrote software, not software that came bundled with hardware.  The first such company anywhere, according to Wikipedia, was Computer Usage Company in 1955.

The best resource on her I could find was an article in "The Computer Journal" called "An Appreciation of Dina St Johnston (1930–2007)".  It describes how she was writing software in the mid-50s, making her a contemporary of people like Michie, Turing, von Neumann and Gödel.  It also describes what programming was like then:
"She wrote with a Parker 51 fountain pen with permanent black ink and if there ever was a mistake it had to be corrected with a razor blade. Whereas the rest of us tested programs to find the faults, she tested them to demonstrate that they worked."
One of the first commercial jobs for the company was a control system for the first industrial nuclear power plant.  Her company, Vaughan Programming Services, was visited on the 10th anniversary of the British software industry by "Electronic Weekly":
"The staff in a company run by a woman might be expected to contain a high proportion of women, and this expectation is fulfilled", runs the EW report, "but, unexpectedly, a low proportion of the professionals employed have degrees, and there is no great emphasis on strong mathematical background in the mix of skills used."
The industry norms don't seem to have changed very much.  More details can be found on Google Books by searching, "Dina Vaughan" (or St Johnston).

In "Recoding Gender", Dina St Johnston is mentioned along with another female programming pioneer, Dame Stephanie Shirley.  A refugee of World War II, she entered the software industry as a "late pioneer".  She became interested in programming and got into the computer room by sweeping up chads, "I could not believe that I could be payed so much for something I enjoyed so much...early software was was so engrossing."  In 1962 she started "Freelance Programmers", the second software company founded by a woman in the UK.   Her view of the computing industry seems to be one that offered a way to address social and economic problems, "a crusade", a flexible workplace with policies designed to support women with dependents.  Originally designed to help women with children to continue to work, its charter gradually became more broad to include supporting women's careers, then for women with any dependent and in 1975 was expanded, by law, to include men.  The final mission became "people with dependents who couldn't work in the conventional environment".  She says in her biography, the company had always employed some men and at the time of the passing of the equal opportunities law three of the 300 programmers and a third of the 40 systems analysts were male.

A Guardian article, written in 1964, quoted in "Dinosaur and Co", about Shirley and the early IT industry:
"The main qualification is personality...Much of the work is tedious, requiring great attention to detail, and this is where women usually score...Mrs Steve Shirley...has found in computer programming an outlet for her artistic talents in the working out of logical patterns.
Now retired with a young baby, she has found that computer programming, since it needs only a desk, a head and paper and pencil, is a job that can be done from home between feeding the baby and washing the nappies.  She is hoping to interest other retired programmers in joining her work on a freelance basis."
The difficulties in starting a software company in the 1950s and 1960s seem immense.  There was the idea that you couldn't sell software - that it didn't have any value as a product or a service by itself, as customers expected it to be free with the hardware.  Then there was the inequality and sexism.  She called herself "Steve" because no one responded to her business letters when she used "Stephanie".  Banks also required written permission from a man before a woman could open a bank account.  Furthermore, almost all companies and the public service required or expected women to leave their job when they married or had their first child, so you "retired with a young baby".  One of the few ways women could continue to work was to start their own company.

She mentions her title was for "services to the industry" and as any good programmer does, she defines Dames: "...recursively by saying, a Knight is a male Dame".  She recently released a biography called "Let IT Go" which includes many personal struggles as well as parts that are a more practical, British version of "Lean In".

You can listen to her talk in "The Life Story of a Pioneer: From Hi-tech to Philanthropy" (the part about IT and running a software company begins about 12 minutes in; the second half of the talk is dedicated to her philanthropy, mostly for autism).  There's also an earlier recorded video of that talk and others on her University of Oxford page.

The early British IT industry wasn't only about commercialising military projects or solving hardware and software problems; it was also a way of effecting social change - allowing more people to work more flexibly.

Thursday, January 31, 2013

Removing Large Files from Git

When I've used git I've used it pretty much like CVS, SVN and any other version control system I've used before - I've checked in binary files.  Whether that's dlls or jars or gems, I've checked them in.  Pretty much everywhere I've worked people have said this is a problem, and I've tended to argue that it solves a lot of problems - one of the main ones being that when the repository and the management software around it fails, I still have a working system from a checkout/clone/etc.

The price of this is that sometimes you need to clean up old binary files.  Git makes this complicated, but once you've found a couple of tools it's relatively straightforward.

Stack Overflow has Perl and Ruby scripts that wrap a few git commands to list all files in a repository above a certain size: "Find files in git repo over x megabytes, that don't exist in HEAD".  The main gist of it is (in Ruby):

IO.popen("git rev-list #{head}", 'r') do |rev_list|
  rev_list.each_line do |commit|
    for object in `git ls-tree -zrl #{commit}`.split("\0")
      bits, type, sha, size, path = object.split(/\s+/, 5)
      size = size.to_i
      big_files[sha] = [path, size, commit] if size >= treshold

big_files.each do |sha, (path, size, commit)|
  where = `git show -s #{commit} --format='%h: %cr'`.chomp
  puts "%4.1fM\t%s\t(%s)" % [size.to_f / Megabyte, path, where]

Then to remove the old files from the repository:

git filter-branch --force --index-filter 'git rm --cached --ignore-unmatch [full file path]' -- --all
git push --force

Then to clean up the now-unused space in the git repository:

rm -rf .git/refs/original/
rm -rf .git/logs/
git reflog expire --expire=now --all
git gc --aggressive --prune=now

Saturday, December 29, 2012

Transparent Salaries

The stereotype is that developers are notoriously bad at human interactions.  I'd suggest that developers are notoriously bad at interactions that they see as fake - things like small talk and negotiations.  In a developer's mind, or to be honest mine at least, the ability to get paid well or to pay less than retail for a product shouldn't be based on your ability to pretend you're friendly with someone you're not; it should be based on some sort of system.  Why not create a self-consistent system rather than relying on interacting with people?

With this in mind I decided to try to create a transparent system at work to handle salaries.  The problems I see with the way traditional salary is handled, especially the lack of transparency, include:
  • It combines performance with remuneration,
  • Programmers are notoriously bad at valuing themselves, communicating it with others and ensuring that they are adequately paid during a job interview or while employed,
  • It prevents an objective assessment of what your roles and responsibilities are in the organisation,
  • It lacks an acknowledgement of what your skills are worth in the job market,
  • It creates two groups: management and developers.  This allows a combative attitude to be created and is used to justify why developers shouldn't trust business people and management,
  • People tend to find out anyway.
Some of these points I'll admit are difficult to solve whether it's a transparent system or not.  However, the last two points, which I think are especially toxic, can be solved with a transparent system.  In a closed salary system, people are encouraged to secretly find out what other people are worth and to provoke comparisons between each other.  The time periods are often long and the information often incorrect.  If a system is transparent you can solve that problem by making the information accurate and positive.

People tend to ask, "Why does Mary earn more than me? I think I'm a better programmer/analyst/whatever than she is. Was it just because Mary started when the company had more money?"

Joel Spolsky is probably one of the key influences that I've seen on having transparent salaries.  For example, "Why I Never Let Employees Negotiate a Raise":
"...we knew that we wanted to create a pay scale that was objective and transparent. As I researched different systems, I found that a lot of employers tried to strike a balance between having a formulaic salary scale and one that was looser by setting a series of salary "ranges" for employees at every level of the organization. But this felt unfair to me. I wanted Fog Creek to have a salary scale that was as objective as possible. A manager would have absolutely no leeway when it came to setting a salary. And there would be only one salary per level."
The Fog Creek Ladder is based on the idea of achieving a certain level of capability.  The problem I had with the Fog Creek solution was that it seemed to suggest, especially in the skills levels, that a programmer starts off needing help and working with others and then slowly achieves the ability to work alone.  Where I work we wanted the opposite - as you get better at programming you get better at being able to explain, to listen and to work with others.  I think this is especially important if you want to work in an environment with living architecture.

So the input is simply what you do - this should be objective and easy to see (again, we're assuming a more transparent work environment where work is checked in or on a Wiki - if it's not shared you haven't done it).  It's assumed that you perform well - if you're not performing, you're not doing your job.  You can argue your role and performance separately from salary, as these are assumed correct coming in.

The other input to this is local salary.  As Joel has said, if salaries rise quickly or fall sharply then the employees' salary should too.

With this in mind there were three factors we used to calculate salary:
  1. Experience (4 bands - 0-2 rating),
  2. Scope of Responsibility (0-5 rating) and
  3. Skill Set (0-5 rating).
Experience has the least weight and is geared heavily towards moving from graduate to intermediate (three bands over 5 years) and maxing out after 15 years.  

The scope of your responsibilities starts with the ability to make small technical decisions, moves up to choosing the libraries used, and finally to cross-product decisions.  This doesn't mean that we have architect roles though; it means that if you are making these decisions that's what you get paid for, not the other way around.

Skill set is pretty much technical ability, with an emphasis on being able to break work up into different levels of tasks: creating tasks from features, features from iterations, iterations from epics, and charting a course across product cycles and customers.

The next part is how we find an objective measure of salaries to match the levels we've created.  I found a Queensland salary guide:
Software                            Junior    Intermediate  Senior
Analyst Programmer - J2EE           $60,000   $90,000       $110,000
Analyst Programmer - MS.Net         $60,000   $90,000       $120,000
Analyst Programmer - Other          $60,000   $85,000       $110,000
Applications / Solutions Architect  $100,000  $140,000      $180,000
Team Leader - J2EE                  $90,000   $108,000      $117,000
Team Leader - MS.Net                $85,500   $100,000      $122,000
Team Leader - Other                 $81,000   $90,000       $99,000

The main problem with these guides is the lack of acknowledgement of cross-functional abilities.  They tend to break out employees by traditional titles: system administrator, database administrator, support, architect and programmer.  These are all roles that I expect everyone to be able to do.  We picked the highest programmer category (MS.Net), but you could argue it should be higher based on the ability to handle iterations, customers and architecture (so maybe between $60,000 and $180,000).

Our version of Joel's ladder:

                Average of Scope and Skills
Experience      0      1      2      3      4      5
Graduate        0      1      2      3      4      5
Junior          1      2      3      4      5      6
Intermediate    1.5    2.5    3.5    4.5    5.5    6.5
Senior          2      3      4      5      6      7

The maximum score is 7 with the base values starting from your experience (0-2).

So our "developer" salary was:
Graduate    Junior     Intermediate    Senior
$40,000     $60,000    $90,000         $120,000

With each point (from the previous table) being weighted at roughly $11,400 (the $80,000 difference divided by 7 points, rounded down), if the points come out to a non-whole number you can interpolate between grades - a 6.3 would be $111,820 ($40,000 + 6.3 * $11,400).  What might be a bit confusing is that $40,000 is really the minimum and $120,000 is the maximum.
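
Spelled out as code, the calculation is just a weighted sum.  A minimal sketch, using the bands and weights from the tables above:

BASE  = 40_000     # the zero-point, graduate salary
POINT = 11_400     # ($120,000 - $40,000) / 7 points, rounded down

EXPERIENCE = { graduate: 0, junior: 1, intermediate: 1.5, senior: 2 }

# scope and skill are each rated 0-5; their average is added to the
# experience base to give a score out of 7
def salary(experience, scope, skill)
  points = EXPERIENCE.fetch(experience) + (scope + skill) / 2.0
  BASE + points * POINT
end

salary(:intermediate, 5, 4)   #=> 108400.0
salary(:senior, 5, 5)         #=> 119800.0 (just under the maximum, thanks to the rounding)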

Overall I think this is a better system than negotiating up front and then at regular intervals (usually before or after a project), which reeks of an up-front-heavy process.  Does it really need to be?  Salary seems to be one of the last things that isn't considered a continuous process, unlike most things in software development now.  You turn salary into another feedback process by making it transparent.  Could you turn salary into an iterative process?  Could you iterate on it more quickly than yearly, possibly monthly or even weekly?

While the inputs are supposed to be objective, you can't say this process is value free.  We've made choices about what we think is more important.  As with many of these processes, getting agreement may be harder than setting up the initial process.  It might be as hard as trying to retroactively apply a coding standard.

The only negative I can think of is if you're a person (especially in business) who believes that everything is a negotiation and that you should never leave anything on the table.  This is where I think the developer-versus-business idea comes in.  I think it's an overall cultural negative - especially if these are the same people who are creating customer contracts and the like.  As a developer you want to work with your customers and business people.

Update: "Psst...This Is What Your Co-Worker Is Paid":
Little privacy remains in most offices, and as work becomes more collaborative, a move toward greater openness may be inevitable, even for larger firms...But open management can be expensive and time consuming: If any worker's pay is out of line with his or her peers, the firm should be ready to even things up or explain why it's so...And because workers can see information normally kept under wraps, they may weigh in on decisions, which can slow things down, company executives say. 
Once employees have access to more information, however, they can feel more motivated.

Tuesday, October 02, 2012

Imagination Amplifier

This is a reproduction of an article that appeared in COMPUTE!'s Gazette, Issue 59.  I'm reproducing it here because I think it's particularly good and it only appears in formats where it's unlikely to be found again.  Alternative link to his November COMPUTE! article.

Worlds Of Wonder - WOW!

In this month's mailbag I received a letter from Art Oswald of Goshen, Indiana. Art was responding to my article in the November COMPUTE! magazine about computers of the future. He wrote: "In the future, the phrase 'I wonder' will become obsolete. I won't have to wonder what would happen if, or wonder what something was like, or wonder how something might be. I would just ask my computer, and it would simulate by means of holographic projection anything my imagination could come up with."

Now, I ask you, Art, is this something to look forward to or something to dread?

I have a new science-fiction book coming out which deals with this subject — the effect of computers (and electronic media, in general) on the human imagination. The book is Robot Odyssey I: Escape from Robotropolis (Tor Books, April 1988). Listen to two teenage boys carrying on a conversation in the year 2014:
We think plenty using computers, but we don't imagine. We don't have to imagine what the fourth dimension is, or what will happen if we combine two chemicals, or what the dark side of the moon looks like. The computer is there a step ahead of our imagination with its fantastic graphics, cartoons, and music. We no longer imagine because the computer can do our imagining for us. 
"So why imagine?" Les said. "My pop says most people's imaginations are vague and fuzzy anyway. If the computer imagines stuff for them, it'll probably be a big improvement.
Les is right. If the computer "imagines" something, it is usually based on a database of facts, the vision of an artist, or a scientific model created by experts. How could our puny imaginations compete with images that are this inspired, detailed, and exact?

Frontiers Of Knowledge 

Science-fiction writers think a lot about new worlds of wonder. It is the human desire to "go boldly where no man has gone before" that is among our more noble impulses. It may even be the "engine" that drives us to innovate, invent, and take risks. Without this engine, we might sink into a kind of emotional and intellectual swamp. Life could become extremely boring. Every time we contemplated a decision, we would first ask our computer, "What if?" and see what the consequences might be. Knowing too much might even paralyze us and cool our risk-taking ardor.

Imagination Amplifiers

Art writes that the phrase I wonder may be rendered obsolete by computers, but I'm not certain that he's right. Instead, I think that we could use computers to stimulate our imagination and make us wonder about things even more.

Where does our imagination come from? I picture the imagination as a LegoTM set of memory blocks stuffed into the toy chest of our mind. When we imagine something, we are quickly and intuitively building a tiny picture inside our heads out of those blocks. The blocks are made up of images, tastes, smells, touches, emotions, and so on — all sorts of things that we've experienced and then tucked away in a corner of our minds. The quality of what we imagine depends on three things: how often we imagine, the quantity and diversity of blocks that we have to choose from, and our ability to combine the blocks in original — and piercingly true — ways.

Most of us have "pop" imaginations created from images supplied to us by pop culture. We read popular books, see popular movies, watch the same sitcoms and commercials, and read the same news stories in our newspapers. It's no wonder that much of what we imagine is made up of prefab structures derived, second hand, from society's small group of master "imagineers." Electronic media has made it possible for these imagineers to distribute their imaginations in irresistible packages. If you have any doubt, ask an elementary school teacher. Her students come to school singing jingles from commercials and write "original" compositions which really are thinly disguised copies of toy ads, movies, and Saturday morning cartoons.

Where does the computer fit into this picture? It could be our biggest defense against the imagination monopoly which the dispensers of pop culture now have. If we can tell the computer "I wonder" or ask it "What if?" it will work with us to build compelling images of what we imagine. If the process is interactive, and we can imagine in rough drafts, then we can polish, ornament, and rework our images as easily as a child working with sand on a beach. Then maybe the images inside our heads will be from imagination experiments that we do with our computers and not stale, leftover images pulled from the refrigerator of pop culture.

Fred D'Ignazio, Contributing Editor

Thursday, September 20, 2012

Not Much Better

I've been reading "A Question of Truth", which is primarily about homosexuality in the Catholic church and the references to it in the Bible.  It has a lengthy, careful but very easily read introduction: it explains many currently held views, distinguishes acts from intents, covers the damage done to people, and carefully describes the different aspects of sexuality, separating all the issues well; it also does a reasonably good job of describing the difference between intensional and extensional usage.

A lot of this is Bible study 101 - modern ideas like love, homosexuality, marriage, property, slavery, and so on have moved or did not exist when the Bible was written, so what people often read into it is not the original intent.  Not that I would say the original intent is much better - and that's the real problem.

The book effectively reasons around all the major passages that people use to treat gay people badly.  However, in the course of that reasoning, it just seems to move from treating homosexuality as sinful to reinforcing women's historical position in society.

For example, the infamous Leviticus passage about men not lying with men is reasoned to mean not that the act is wrong but that a man shouldn't treat a man like a woman.  Another is the story of Lot and our friends the Sodomites, which again involves offering up your daughters for hospitality reasons; the suggestion is that Sodom was destroyed because the visitors were humiliated, not because of any gay love.

There's a sentence or two along the lines that no modern Christian would treat women in this way (or have slaves?), which I thought rather undermines the whole point of the exercise.

Friday, May 18, 2012

Constructivism - Why You Should Code

I think this article on why you shouldn't code is wrong.  It's wrong in the way I was wrong in high school when I thought I would never need to know German, art or biology.  It's wrong in the way I was wrong about never needing to know set theory or relational theory or category theory.  But it's also wrong in ways I will never really know, "Computer As Condom":
Debbie hated math and resisted everything to do with it. Tested at the bottom of the scale. She learned to program the computer because this let her play with words and poetry, which she loved. Once she could write programs she found a way to tie fractions into words and poetry. Writing witty programs about fractions led her to allow herself to think about these previously horrible things. And to her surprise as much as anyone's her score on a fractions test jumped into the upper part of the scale.
What you do as a job programming in C#, Java, JavaScript or whatever has very little to do with the way people use coding to learn about learning.  That's the most disappointing thing about the article: the terrible idea that learning how to code lessens the world if you do it wrong.  Learn to code backwards in time in Latin in Perl, but don't listen to anyone who says you shouldn't code.

Monday, May 14, 2012

Lectorials

I just finished a study group on Learn You a Haskell for Great Good.  It was a great experience for many reasons, but I think the way each session was structured - a combination of lecture and tutorial - deserves particular attention.

The weekly structure was fairly straightforward: a chapter leader covers a chapter the week before the rest of the group, writes a summary and sets some programming questions.  The weekly sessions took about an hour and a half.  This consisted of the chapter leader going through their summary, with the group interjecting with questions and answers (if the chapter leader didn't know, there might be some furious Googling to find a good reference or an answer someone half remembered).  The programming questions and answers would usually go around the table: each person would answer a question and the others would then comment on it or show their own answer if it was particularly different (or shorter, or whatever).  The time was split roughly 60/40 between lecture and programming/tutorial.

Compared to university courses, where you often had two hours of lectures and then one or two hours of tutorials spread out over a week, this arrangement seemed very time efficient.  The other advantage was getting the students to run the study group.  The chapter leader had to spend a lot more time making sure they understood the chapter in order to answer any questions that came up during the review and to set the programming questions.  For me, setting the questions and making sure you had answers (and, by the end of it, tests to help people along) was probably the best part of the learning experience.  There was no real hiding if you hadn't done the answers either - partially because it was such a small group but also because of the high level of participation.

It'd be interesting if there were university courses where you were graded not just on an examination and assignments but also on the questions you set and on whether you could run a small group of people through a class.  It would also make tutorials, which students often skip, more relevant.

It seems "lectorial" also means "large tutorial in a lecture hall to give context around information given in lectures".  They also mention small group activities and class-led presentations, so there is some overlap.