We are moving everything to the web: stores, communication, applications, documents. Some of those moves really worked, mostly things that used to have physical boundaries.
But content on the web is lacking.
The web resume
The promise of the web resume is fantastic. Greater visibility, embedded microformats, interactivity, updates, and more.
Instead, chances are, your current resume is a Word document coupled with a PDF lying around in your filesystem. You update it every once in a while and send out the current version whenever somebody asks.
You might have a LinkedIn profile, but you probably don’t pass it on to employers. You might be getting a few approaches from headhunters, but the opportunities haven’t been right. You’re not entirely sure how to fit your experience into the LinkedIn format.
All these promises aren’t working out, but you don’t care. The PDF just looks better and it will look the same when you print it out.
Web apps are increasingly, in theory, able to replicate the features of native apps. They needn’t pass through a market, are easy to update, etc. However, native apps still dominate on mobile.
There is something rigid and beautiful about a native app that the web experience cannot replace (Sun gets very close, but I’m sure only at the cost of some serious man-hours).
The beauty of PDF
I find myself equally torn about getting rid of my PDF resume. There is nothing special about it: it’s a simple tabular design. But the LaTeX typesetting and the ease of editing are two features I cannot give up.
See, having only a web version means maintaining a print stylesheet, and getting LaTeX quality out of a print.css is a challenge.
That elusive finish
There are examples that demonstrate that the web can be beautiful, fast, and powerful. But those examples have taken a lot of effort. When writing an app or a resume, you want to think about the content, not about making sure it doesn’t disappoint in a browser. Because by default it does.
Lack of constraints
Part of the problem is that the web really can be a solution to anything. We can build a cute tool for list-, menu-, and button-filled web apps, and we can build a stellar typesetting environment. Can we design for everything? Or are we forced to leave design to the web author?
Good design and quality are much harder to give up than we think. They could come through new frameworks for the web, as impress.js and reveal.js have shown by providing powerful competition to desktop presentation software. Ultimately, though, the web needs to be beautiful by default. Like Cocoa, like LaTeX, like Keynote.
Disclaimer. This post was inspired by a conversation with @nickbarnwell.
After reading Conal Elliott’s recent Reimagining Matrices post on defining matrices as inductive linear maps, I was inspired to try out denotational design too. Daniel Rowlands pointed out that Conal’s approach is tensor-inspired and further generalizable, so I wanted to find out what that meant.
I set out to define tensors as nicely as I could. I didn’t end up needing to do equational reasoning, but the notion of having a type-level representation and a semantic function fit the domain well.
The definition I’m using is that, for a given vector space over a field F (say, the real numbers), a tensor on the space is a multilinear map from some number of vectors and dual vectors to F. This gives rise to a notion of type, where a tensor of type (n, m) accepts m vectors and n dual vectors.
A little bit of algebra shows that such tensors can be represented with n vectors and m dual vectors. A dual vector can be combined with a vector to give a value in the field, so you pair the supplied arguments with the representing vectors and dual vectors and take the product of the resulting values.
This representation is mathematically meaningful (it corresponds to taking tensor products of vector spaces and dual spaces), but for our purposes what matters is that it gives us an inductive definition of tensors. We have a TUnit wrapper that builds a tensor from a vector and a TCoUnit wrapper that builds a tensor from a dual vector, and we combine these using TProduct.
My first attempt computes the type implicitly at the value level, but as long as only classical tensor operations are used, the tensor type can be computed at compile time. So I also pieced together a version using the shapeless library, with its natural-number types and sized collections. With help from Travis Brown on StackOverflow, I got it to work. I used Double as the field for simplicity and V as an arbitrary vector space over Double.
The code is below and includes a simple test for the typed version.
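To make the inductive representation concrete, here is a rough sketch in Python (not the original Scala/shapeless code; the names and the list-based vector encoding are my own, chosen for illustration). A tensor of type (n, m) is a function that eats n dual vectors and m vectors; TUnit, TCoUnit, and TProduct correspond to `t_unit`, `t_co_unit`, and `t_product`.

```python
# Illustrative sketch of the inductive tensor representation.
# Vectors and dual vectors are both lists of floats; a dual vector
# acts on a vector via the dot product.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def t_unit(v):
    """A type (1, 0) tensor built from a vector: it eats one dual vector."""
    return lambda duals, vecs: dot(duals[0], v)

def t_co_unit(w):
    """A type (0, 1) tensor built from a dual vector: it eats one vector."""
    return lambda duals, vecs: dot(w, vecs[0])

def t_product(t1, n1, m1, t2, n2, m2):
    """Combine a type (n1, m1) and a type (n2, m2) tensor into one of
    type (n1 + n2, m1 + m2): split the arguments and multiply the results."""
    def t(duals, vecs):
        return t1(duals[:n1], vecs[:m1]) * t2(duals[n1:], vecs[m1:])
    return t
```

For example, `t_product(t_unit(v), 1, 0, t_co_unit(w), 0, 1)` is a type (1, 1) tensor, i.e. essentially a linear map. In the typed Scala version, the (n, m) bookkeeping done here with explicit integers moves into the types.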
An extended title would read “temporal logic programming with explicit discrete time”. Temporal logics allow reasoning about time-varying propositions. By using a relational arithmetic implementation of time, we can to a limited extent express the core temporal operators in core.logic:
- next P, which means P is true at the next moment of time
- eventually P, which means P is true now or at some time in the future
- hereon P, which means P is true now and at any time in the future
While I have used explicit time, I hope it is clear to the interested reader that this could readily be made into a miniKanren extension in which all relations carry implicit time. Some non-relational operators have been used in the examples, but an actual implementation of the three temporal operators can be made relational, subject to some restrictions on context. The examples and the implementation were inspired by Templog, a temporal Prolog variant.
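The semantics of the three operators can be sketched outside of logic programming too. Here is a minimal Python illustration (my own, not the core.logic version): a proposition is a function from a discrete time step to a boolean, and for simplicity "the future" is cut off at a finite horizon, whereas the relational version lets arithmetic over time remain unbounded.

```python
# Sketch of the three temporal operators over explicit discrete time.
# A proposition is a function from an int time step to bool.

HORIZON = 100  # arbitrary finite bound standing in for "the future"

def next_(p):
    """next P: P holds at the next moment of time."""
    return lambda t: p(t + 1)

def eventually(p):
    """eventually P: P holds now or at some later time (up to the horizon)."""
    return lambda t: any(p(u) for u in range(t, HORIZON))

def hereon(p):
    """hereon P: P holds now and at every later time (up to the horizon)."""
    return lambda t: all(p(u) for u in range(t, HORIZON))
```

The relational versions of `eventually` and `hereon` replace these bounded scans with recursion over a relationally encoded successor arithmetic.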
Yahoo was the first page on the internet I learned to use, so it pleases me to see it going through a period of revitalization. I expect some exciting product changes and introductions from them soon. Before we see them commit to something, though, I wanted to think with an open mind about what exactly Yahoo should be aiming for. I wanted to borrow the opinionated style of Andrew Kim while digging slightly deeper: I went not only through product pages and Yahoo history but also through some of its developer and research pages, so as to suggest only things that are consistent with Yahoo’s intellectual, infrastructural, and artistic resources. Here are my notes from the process:
With these strengths in mind, Yahoo has a lot of separate product efforts that are tightly related. For example, services like Pipes, myYahoo, alerts, and all aspects of news are about up-to-date information. What if you had the power of Pipes with the accessibility of News?
Reading further about Yahoo’s streaming Hadoop job infrastructure and its deep understanding of advertising and economic data, I think Yahoo is perfectly placed to deliver a dashboard solution. We need a big player to deliver this at scale, and none of the other giants have taken the leap. Google Now is a poke in that direction, I suppose, but it’s push-based and has no API for delivering custom information. Businesses and consumers should not have to roll their own dashboard solutions, much as they don’t write their own e-mail clients.
I call it Horizon, to suggest access to what’s on the other side of the globe, i.e., beyond the horizon.
An HTML5-based, thoughtfully designed library of visualization elements would allow integration with other vendors’ and individuals’ data. A market for data, if you will.
Following Edward Tufte’s visualization principles, I chose a monochromatic, text- and line-heavy design in which properties like size, shape, and brightness directly reflect the novelty and importance of the information.
Horizon scales from big screens to watches.
This would be a relatively “safe” product, filling a clear gap in the market with very familiar technology. For the more distant future, I think Yahoo should innovate in interactive learning. It is known for combining expert knowledge and insight with computer-delivered data. Yahoo always had a bit more humanity to it than some of its competitors, and that should be the bottom line going ahead.
Presented this at the Data Date Meetup in Riga earlier tonight. Program induction is the act of automatically generating programs. This is an example of a naive search in Haskell for arithmetic programs of one variable. It’s an exciting and growing subfield of AGI, and I expect big things from it.
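To give a flavour of what such a naive search does, here is an illustrative Python sketch (not the Haskell code from the talk; the grammar of constants and operators is my own choice): enumerate arithmetic expressions in one variable by increasing depth and return the first one consistent with the given input/output examples.

```python
# Naive program induction: breadth-first enumeration of arithmetic
# expressions in one variable x, tested against input/output examples.

def exprs(depth):
    """Yield (eval_fn, source) pairs for expression trees of the given depth."""
    if depth == 0:
        yield (lambda x: x, "x")
        for c in (1, 2):
            yield ((lambda c: lambda x: c)(c), str(c))
        return
    for fl, sl in exprs(depth - 1):
        for fr, sr in exprs(depth - 1):
            yield ((lambda f, g: lambda x: f(x) + g(x))(fl, fr), f"({sl} + {sr})")
            yield ((lambda f, g: lambda x: f(x) * g(x))(fl, fr), f"({sl} * {sr})")

def induce(examples, max_depth=3):
    """Return the source of the first program consistent with all examples."""
    for d in range(max_depth + 1):
        for f, src in exprs(d):
            if all(f(x) == y for x, y in examples):
                return src
    return None
```

For instance, `induce([(0, 1), (1, 2), (2, 3)])` finds `(x + 1)`. The search is exponential in depth, which is exactly why the field is interesting: the real work lies in pruning and guiding this space.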
Gave a talk last Thursday on Logic Programming at HNLondon, here is the video and the slides.
Wrote a miniature on leveraging arrows for composing probabilistic mappings. The arrow of probabilistic mappings.
I want to talk a little bit about the bootstrap principle. The book “Artificial Beings” by Jacques Pitrat helped me realize its ubiquity.
Programmers will be familiar with bootstrapping compilers. Have you heard of a C compiler written in C? That makes a lot of sense: if you believe a language is great, you want to write its compiler in that same language. On the other hand, isn’t it paradoxical? If we need the language to compile itself, how do we compile the compiler in the first place? We resolve this by adding version information. You can use C 1.0 to write a compiler for C 2.0, and you will have compiled C 1.0 with a compiler written in C 0.0. But then how do you compile C 0.0? Just write the first compiler in assembly, or FORTRAN, or whatever was available. So, treating different versions as different languages, bootstrapping is really just a cheat.
Evolution is another example of bootstrapping. Let’s do something grand and resolve the chicken-and-egg problem, which ponders what came first: the egg or the chicken. The paradox is that to produce a chicken egg you need a hen (which a chicken grows into), and to produce a chicken you need a chicken egg. Now let’s apply the versioning model and assign each chicken a version number. The first “chicken” that produced the first chicken egg 1.0 might have just been chicken version 0.999999. If you then need to know whether the egg or the chicken came first, it is reasonable to assign an egg the same version number as the chicken it produces. Then egg 1.0 came before chicken 1.0. And that’s it.
The problem lies in our discrete labels for species: individuals are sufficiently similar that for some purposes we don’t treat them as individuals (the same goes for minor editions of a programming language). However, we wouldn’t be able to draw exactly what the first chicken or the first homo sapiens looked like, as those labels, naively applied, have fuzzy boundaries. Aristotle never realized this, as he wouldn’t step away from discrete categories:
If there has been a first man he must have been born without father or mother – which is repugnant to nature. For there could not have been a first egg to give a beginning to birds, or there should have been a first bird which gave a beginning to eggs; for a bird comes from an egg.
Can we apply this somehow? One everyday chicken-and-egg problem is social networks. To get a large user base, there needs to be good growth, but viral growth is proportional to the size of the user base. The solution (apart from paid advertising) is incremental growth: a few more users, a little more growth, more users again, and so on. You are incrementally versioning the user base and the growth trend.
So there you have it. Whenever you find a chicken and egg problem, start thinking more continuously and replace fuzzy boundaries with variables and versions.
For those interested in the inspiring example from the book, here it is. The author argued that the best first problem area for an artificial being with general intelligence is creating artificial general intelligence (AGI) itself, since that would enable the being to grow and extend itself incrementally. The hypothesis being that AGI is just AI 3000.
Realized Prolog’s dynamic assertions can be used to implement simple probabilistic programming with rejection sampling. It can probably be made much prettier, perhaps by adding free variables as arguments of output or by using clause manipulation. But this works.
What we are doing here is generating X and Y from two random dice rolls and adding them up. Then we want to sample some values of the sum given that rolling X resulted in a 2. We could just set X to 2 and sample only Y, but I’ve tried to do it properly, allowing, if needed, for X to depend on other random variables.
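For comparison, the same rejection-sampling idea can be sketched in a few lines of Python (this is my own illustration, not the Prolog version): sample both dice, throw away every run where X did not come up 2, and keep the sums of the surviving runs.

```python
# Rejection sampling for the sum of two dice, conditioned on X = 2.
import random

def roll():
    """One fair six-sided die."""
    return random.randint(1, 6)

def sample_sum_given_x_is_2(n_samples):
    """Sample X + Y, keeping only the runs consistent with the observation X = 2."""
    samples = []
    while len(samples) < n_samples:
        x, y = roll(), roll()
        if x != 2:          # reject runs inconsistent with the evidence
            continue
        samples.append(x + y)
    return samples
```

Since X is generated rather than fixed, the same scheme keeps working if X later depends on other random variables; you only pay for it in rejected runs.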
For programming language junkies only:
PAKCS, or the Portland Aachen Kiel Curry System, is a state-of-the-art Curry implementation. And Curry, well, is the standardized functional logic programming language (think Haskell + Prolog).
Setting it up was a pain, as it didn’t work on the Mac for some reason. It was pretty straightforward in a Vagrant Linux VM, though.
vagrant box add precise http://dl.dropbox.com/u/1537815/precise64.box
vagrant init precise
vagrant up && vagrant ssh
sudo apt-get install haskell-platform
sudo apt-get install swi-prolog
tar -zxvf pakcs_src.tar.gz
Add PATH="$PATH:~/pakcs/bin" to your .bashrc and run pakcs. You should see this:
______ __ _ _ ______ _______
| __ | / \ | | / / | ____| | _____| Portland Aachen Kiel
| | | | / /\ \ | |_/ / | | | |_____ Curry System
| |__| | / /__\ \ | _ | | | |_____ |
| ____| / ______ \ | | \ \ | |____ _____| | Version 1.10.1 (2)
|_| /_/ \_\ |_| \_\ |______| |_______|