This is an archival copy of my PhD blog, which was active between 2009–2015. I'm publishing it again as a personal time capsule, but also because I think it's an interesting documentation of the PhD process itself, which might be useful to someone, somewhere. – Chris Marmo, January 2026

Nokia Lenses

The need to make sense of an ever-increasing stream of data has never been greater, especially in a mobile context. At Mobile HCI’10 last year, a team from Nokia Research presented a prototype solution for feed aggregation, dubbed Lenses, which lets people curate their own streams of content relevant to different purposes – different self-generated contexts.

From the paper:

Lenses explored the need for a universal inbox and the value in interacting with fine-grained filtering mechanisms of content on a mobile phone… [It represents] an alternative to the application paradigm where people can engage with their data using different perspectives such as topics of interests, tasks, or roles in their life.

How could these be more closely tied to location, and made sensitive to context?
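The paper doesn’t spell out an implementation, but the idea of fine-grained lenses over a universal inbox can be sketched roughly like this – a minimal illustration only, where all the names and fields (`Item`, `Lens`, `topics`, `sources`) are my own, not Nokia’s:

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    """One entry in the universal inbox, whatever its origin."""
    source: str   # e.g. "email", "rss", "twitter"
    topic: str    # e.g. "work", "travel"
    text: str

@dataclass
class Lens:
    """A named, fine-grained filter over the single unified stream."""
    name: str
    topics: set = field(default_factory=set)
    sources: set = field(default_factory=set)

    def view(self, inbox):
        # An empty filter set means "don't filter on that dimension".
        return [i for i in inbox
                if (not self.topics or i.topic in self.topics)
                and (not self.sources or i.source in self.sources)]

inbox = [
    Item("email", "work", "Meeting moved to 3pm"),
    Item("rss", "travel", "Cheap flights to Berlin"),
    Item("twitter", "work", "Conference deadline extended"),
]

work_lens = Lens("Work", topics={"work"})
print([i.text for i in work_lens.view(inbox)])
# → ['Meeting moved to 3pm', 'Conference deadline extended']
```

The same inbox viewed through a different lens (say, one filtered on `sources={"rss"}`) yields a different stream – the "different perspectives" the paper describes, without any per-application silos.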

Context: awareness vs sensitivity

I’ve been doing a lot of reading and writing around context awareness the past couple of months – so much so that I changed the subtitle of this site to include it. It’s safe to say that the notion of this kind of awareness completely captured my imagination, or at the very least, led me to line up a whole stack of journal articles and books on the topic.

With the plethora of location-based applications appearing on various mobile platforms, the ubiquitous nature of geo-tagged data and the popular media’s seemingly undying thirst for the latest tech innovation, location enjoyed a pretty good ride in 2010. Starting with location as a focus of research (as I did), it’s not long before you realise that a coordinate is just one piece of metadata that can describe context, and it seems like a natural progression to begin thinking about broader notions of the term.

The next thing you realise after reading all about the current attempts at context-awareness is that, well, they fail to be all that useful.

There are many very intelligent systems-based frameworks for building an architecture of sensors that can detect where you are and what you’re doing, and very detailed examples of software implementations that aim to interpret this sensor data to assist their users. It’s not that these frameworks and implementations are poor or under-thought; it’s simply that the technology isn’t there yet, and our expectations are too high.

Great Expectations

This isn’t our fault though – the term “aware” is loaded with expectation. It immediately conjures notions of Asimov-type robots that basically act and understand as we do – of computational uber-humans superior to us in every way – and ones that we will either grow to love or fear completely.

The problem is hinted at above – in the interpretation. Whilst we might have sensors that can pinpoint you on a map, know who you’re with, whether you’re talking or not, walking or not, whether you’re standing, sitting, or lying down, the problem lies in the translation of this sensed information into meaningful and accurate interpretations for software to use.

The optimist and sci-fi fan in me thinks that, one day, we will see a convergence of sensor technology and artificial intelligence that will provide useful scenarios to people. You might argue this happens already – a pilot’s cockpit springs to mind. But the fact remains that the detection of meaningful, dynamic and social context is a long way off.

Context is socially constructed

I’m working on a longer article on this at the moment, so I won’t go into too much detail. It is worth noting, however, that whilst the cockpit of a plane is a highly controlled environment where all variables are known, much of what we would define as context is socially constructed. That is, its existence is fleeting, and only arises out of interaction between people, objects and the environment.

Whilst we may be able to detect your location fairly accurately, the context of your presence there is very difficult to detect. Test this next time you’re in a cafe – note all the different activities taking place there. The animated conversations, the quiet reading, the anxious waiting, the scurrying (or bored?) staff. For each of these actors, the place holds a completely different meaning – and hence, a different context.

Context Sensitivity

So if we can’t rely on technology to sense and interpret that kind of context, then what can we do? Well, I’m not sure I have any answers to this, but I would suggest that we first lower the expectations of and burden on our technology. When compared to “awareness”, a word like “sensitivity” seems much more realistic. We can’t do Bicentennial Man just yet, but what we can do is make intelligent assumptions about when, where and how our technology might be used, and we can selectively use sensed data to inform the design of our applications.

That is, I believe it is the role of design to augment the technology – instead of relying on technology to give us context awareness, we should rely on design to give us context sensitivity.

A Refresh

It’s been 12 months since I started my PhD, and it’s probably a good time to take stock and figure out exactly where I’m taking things. I wrote an original abstract after our initial visit and talks to park rangers, and have had it stuck to my wall since. Today, finally, it bugged me – so, I thought it was time to present to you a new one; followed by some of the major changes in the stances I’m taking.

Read the new abstract here.

So… what’s changed?

Context awareness

The major difference is the change from “location based” to “context aware”. Whilst location is still a large part of the research, it is just one variable in the broader notion of context – a location alone isn’t enough when it comes to understanding the knowledge people have about somewhere; the meaning people attach to the raw “space” – a GPS coordinate – tells us far more about the knowledge that is used and created there.

This notion of a raw location augmented with social meaning is referred to as “place”, and it is this socially constructed notion of location – not an x,y coordinate – that I will use as the core meaning of the term “context”.

Facilitation

Following from the idea of “social construction”, I’ve made it a point to explicitly state that whatever I design/build will not aim to interpret and provide meaning itself, but will consist of services and interfaces that allow people to construct their own interpretations of data, and to communicate them to others. The idea of knowledge as a social object is important here, as is literature around communities of practice and situated cognition. It’s more meaningful if you let people discover things for themselves.

Removing tacit

I’ve taken the word “tacit” out of the abstract – not because I won’t be dealing with it, but because I’ve wondered if it’s too limiting. The idea of facilitating knowledge discovery and creation is still directly related to tacit knowledge – the kind that cannot be easily gained or taught – and I feel that this implied direction is enough without explicitly stating that I’m going to solve the world’s tacit knowledge problems.

The social life of knowledge

I’ve also deliberately used words to describe the cycle knowledge goes through in its social contexts – retention, generation, and communication. The system should facilitate all three activities equally, and with as seamless a transition between them as possible.

And next…

This refocuses the project somewhat – now, to actually make something.

OZCHI Presentation: Location and Context

On Friday I gave a presentation in one of the last sessions at OZCHI in Brisbane, Australia. It covered some familiar topics that I’ve presented on before, but also contained new elements of what location as context might mean.

The paper is available in the OZCHI proceedings, and I’ll upload a version here once I get access to a decent internet connection.

Augmenting the new with the old

There seems to be a recent trend towards augmenting new services with nostalgic versions from less tech-y times – the photo above was taken on an inner-city street in Melbourne; it invites people to take a token and SMS the code. Just because you can pull out the technology, doesn’t mean you should.

Intel’s context-aware vision

A few weeks ago Intel CTO Justin Rattner gave a keynote speech on Intel’s vision for context-awareness. The opening video is a little cheesy, but it shows just how important a problem the notion of context has become to technology (and the companies most involved in its creation). Most of the examples shown are around intelligent recommendations – mobile phones that pick’n’mix information from various applications running inside them. Scenarios show applications making food and sightseeing suggestions, and reminding you to bring an umbrella because it might rain soon.

The one that struck me most was a remote control that built user profiles from the way you press its buttons.

Still, I find this vision lacking something. It’s all about what can be computationally sensed. What about context that is created dynamically, fleetingly, and between people – as so much of it is?

The video is worth a watch: