
A personal lifelog of visual information can be a valuable human memory aid. The SenseCam, a passively capturing wearable camera, captures an average of 1,785 images per day, which equates to over 600,000 images per year. So as not to overwhelm users, it is necessary to deconstruct this substantial collection of images into digestible chunks of information, i.e. into distinct events or activities. This paper improves on previous work on the automatic segmentation of SenseCam images into events by up to 29.2%, primarily through the introduction of intelligent threshold selection techniques, but also through improvements in the selection of normalisation, fusion, and vector distance techniques. We use the most extensive dataset yet applied in this domain: 271,163 images collected by 5 users over a period of one month, with manually ground-truthed events. ©2008 IEEE.
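To make the approach summarised above concrete, here is a minimal sketch of threshold-based event segmentation. It is not the paper's implementation: the feature representation, the Euclidean distance measure, and the mean-plus-k-standard-deviations threshold rule are all illustrative assumptions standing in for the normalisation, fusion, distance, and threshold-selection techniques the paper actually evaluates.

```python
import numpy as np

def segment_events(features: np.ndarray, k: float = 1.0) -> list[int]:
    """Segment a stream of lifelog images into events by thresholding
    the distance between consecutive image feature vectors.

    Illustrative sketch only, not the paper's method. `features` is
    assumed to be an (n_images, n_dims) array of per-image descriptors
    (e.g. colour histograms), already normalised and fused.
    """
    # Distance between each image and its successor; Euclidean is an
    # assumption here (the paper compares several distance measures).
    diffs = np.linalg.norm(np.diff(features, axis=0), axis=1)

    # Simple adaptive threshold: mean + k * std of the distance signal.
    # The paper's contribution is more intelligent threshold selection;
    # this global statistic is just a stand-in.
    threshold = diffs.mean() + k * diffs.std()

    # Declare an event boundary wherever consecutive images differ by
    # more than the threshold; return indices of first images of events.
    return [i + 1 for i, d in enumerate(diffs) if d > threshold]
```

The returned indices mark the first image of each new event; in a full pipeline, per-feature distance signals would be normalised and fused before the thresholding step.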

Original publication

DOI: 10.1109/WIAMIS.2008.32
Type: Journal article
Journal: WIAMIS 2008 - Proceedings of the 9th International Workshop on Image Analysis for Multimedia Interactive Services
Publication Date: 19/09/2008
Pages: 20-23