This is the complete archive of posts from Inductio Ex Machina in reverse chronological order.
Despite the jet lag and the extreme heat, I had a very enjoyable time at COLT this year. This is a summary of some of my highlights, as well as a list of work I saw that I'd like to investigate further.
Spreading the word about the research track at the PAPIs conference that I am helping chair this year.
After an unintended break of more than a year, I've decided to start up this blog again.
A summary of a recent discussion between the JMLR Machine Learning Open Source Software (MLOSS) Action Editors about what “open” means.
An introduction to and survey of some interesting results about Bregman divergences.
A brief description and discussion of Zhu et al.'s RegBayes framework for generalising Bayesian updating.
Spreading the word about a number of machine learning research jobs at NICTA.
A brief operational note on how and why I shifted my site and this blog from Jekyll to Hakyll.
Looking at some data on gun-related deaths and gun ownership worldwide in the wake of the Sandy Hook shooting.
The facts behind this mysterious Twitter account can finally be revealed!
The ICML discussion site rides again, this time integrated into the conference site itself.
A brief note spelling out a key relationship in information geometry.
A short note describing the Prediction With Expert Advice game and why it is a special case of Online Convex Optimisation.
I am co-organising a workshop at this year's NIPS to look at how we might better understand machine learning problems by examining their relationships to each other.
The second version of my alternative AI project has been released, code-named Edith Valerie Reid.
A brief note about the ML Discuss site for ICML 2011.
I recently discovered that a result concerning probability estimation in one of my recent papers was already observed by Lindley 28 years prior.
A parody of the Serenity Prayer for those working with Bayesian inference.
An attempt to set the record straight about the role of generalisation bounds in polite society.
A look at an information theoretic inequality that is useful for establishing lower bounds for minimax risks.
I am advertising for a fixed-term postdoc to join my Structures and Protocols for Inference project.
A brief discussion and proof of this very elegant and powerful result of Banerjee's.
Some promotion for the new StackOverflow-like question and answer site for machine learning, NLP and computer vision.
Some advertising for the revamped ICML discussion site I built.
Thoughts on a number of recent prediction services including the Google Prediction API.
I will be attending AISTATS 2010 and presenting a poster on a characterisation of the convexity of composite binary losses.
I'm a local organiser for two upcoming international conferences. This is a shameless bit of promotion.
Some links and brief notes about a recent talk I gave to the Canberra Java User's Group.
In an attempt to better familiarise myself with online learning and Clojure I implemented the former in the latter.
I attended both ICML and COLT this year. This is an overview of what I thought were the most interesting talks.
Robert Williamson and I have had two papers accepted at ICML and COLT 2009. They are both about bounds: one for surrogate losses, the other for f-divergences.
Noting the passing of one of the big names in Bayesian statistics, with a discussion of some of his work I am personally familiar with.
Ken Binmore gives a short Bayesian explanation of why the usual Argument By Design for the existence of God only reinforces existing beliefs.
This is the second part of my attempt to port the Minilight ray-tracer to Clojure. This time it is triangles. Some bugs are found and fixed in the vector package.
In an attempt to learn Clojure I am translating the Minilight ray-tracer. In this first part I build and test a simple 3D vector package.
The lectures I gave at MLSS 2009 in Canberra are now up at videolectures.net.
I have recently been experimenting with Clojure and here I document how I have set up my work environment.
Ada Lovelace day aims to “draw attention to women excelling in technology”. Here I highlight a few women in machine learning whose work I admire.
Some Friday afternoon philosophising on the place of machine learning within the larger disciplines of Artificial Intelligence and Intelligence Amplification.
In machine learning, bias is what allows for generalisation beyond observations. Without it, learning is not possible, regardless of how much data is available and what certain Wired reporters believe.
An overview of some properties of conditional, or point-wise, Bayes risks for proper losses.
This strangely named principle from Binmore's book, “Rational Decisions”, has an unusual take on the axiom of choice and its implications for probability.
Probability estimation is an important class of problems in machine learning. In this, the first of a series of posts, I discuss a natural class of losses for these problems.
March is World Blogging Month. I plan to take up the challenge and write a blog post here every other day in March.
A very counter-intuitive result that highlights the danger of reasoning about higher dimensional space by analogy with lower dimensional ones.
A notice that I've moved this blog to a new domain. Please update your feed readers.
Wherein I compile a list of interesting people who use Twitter to discuss machine learning and statistics.
A summary of a recent paper Bob and I posted to arXiv.
A plug for the 2009 Machine Learning Summer School in Canberra, Australia. I will be giving a presentation there.
Dissatisfied with the very algebraic and formal proofs of Jensen's inequality, I present a diagram that gives a graphical intuition for the result.
Presenting the results of my latest and greatest attempt at creating an intelligent machine.
A quick summary of a paper in Nature last year that analyses the rate at which words shift from irregular to regular.
A review of the book _Super Crunchers: Why Thinking-By-Numbers is the New Way To Be Smart_ by Ian Ayres.
Looking back on a year of research blogging about machine learning.
Some thoughts on Hardin and Taylor's paper "A Peculiar Connection Between the Axiom of Choice and Predicting the Future".
A quick summary of some of the best talks and papers at COLT 2008 in Helsinki, Finland.
Some thoughts on the workshop on evaluation methods that I attended as part of ICML 2008 in Helsinki.
A brief note describing the site I set up for ICML 2008.
A description of a visualisation of some 19th century Australian borrowing records from the Australian Common Readers Project.
A simple example involving irrational numbers makes me think that constructive mathematics has something going for it.
In response to a post by Peter Turney, I list the books I feel shaped my research career.
Discussion of the point-line duality between Drummond and Holte's cost curves and ROC curves. An applet is provided to help visualise this relationship.
Some sage advice by Jacob Cohen on hypothesis testing and p-values.
A brief overview of an RSS archiving tool I whipped up in Ruby.
A summary of a sequence of papers in JMLR that discusses an interpretation of boosting.
A collection of sites around the web that catalogue a wide variety of data sets that may be useful to machine learning researchers.
A follow-up to John Langford's discussion on how mathematics can be misused in an attempt to improve the chance of publication.
An overview of how I use the free Mac application BibDesk along with the online social bibliographic service CiteULike.
A quick post on the use of determinants to define convex functions.
An overview of convex analysis and the Legendre-Fenchel transform by Hugo Touchette proves very useful.
A plug for the workshop talk I'll be giving at NIPS 2007.
Doing mathematics sometimes feels like playing a piece of interactive fiction.
A summary of an interesting talk by Justin Bedo which shows that learning can sometimes go very wrong, and how to exploit it.