Designing for deliberation

I’ve been pulling together bits of background reading over the last two weeks, particularly in the area of interface design for e-participation and designing systems for deliberation. This is far from an exhaustive list of papers, but I hope I have covered the main points regarding design for deliberation and e-participation.

An important starting point for the electronic enhancement of deliberation can be traced as far back as 1970, to Kunz and Rittel's IBIS collaboration framework, and to Conklin & Begeman, who in 1987 developed a GUI on top of IBIS (gIBIS) – an early example of harnessing technological advances to enhance collaborative processes. This framework was actually “a deliberative platform for design” in the software industry rather than a design for deliberation, but it serves as a good basis from which to start a discussion of collaborative and deliberative connected software. Online collaboration proliferated with the growth of the internet and online communities through the nineties, but some treated this with noticeable scepticism (in contrast to the media furore surrounding the dot-com rise). Various authors pointed out weaknesses in the internet model, and in particular two concerns: first, the “hypersegmentation” created by multiple channels and the “digital loneliness” this leads to; second, the creation of enormous quantities of public content – “masses of gibberish” and “mere chatter”.
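To make the IBIS model concrete: it structures a discussion as a typed graph of issues (questions), positions (candidate answers) and arguments for or against those positions. The following is only a minimal sketch of that structure in Python – the class and function names are my own invention, not those of IBIS or gIBIS:

```python
from dataclasses import dataclass, field
from typing import List

# A minimal IBIS-style node: issues raise questions, positions answer
# them, and arguments support or object to positions. The node kinds
# follow the general IBIS vocabulary, not any particular tool.
@dataclass
class Node:
    kind: str                      # "issue", "position" or "argument"
    text: str
    children: List["Node"] = field(default_factory=list)

    def attach(self, child: "Node") -> "Node":
        self.children.append(child)
        return child

def outline(node: Node, depth: int = 0) -> List[str]:
    """Flatten the argument graph into an indented outline."""
    lines = ["  " * depth + f"[{node.kind}] {node.text}"]
    for child in node.children:
        lines.extend(outline(child, depth + 1))
    return lines

# A tiny invented example discussion.
issue = Node("issue", "Should comments be pre-moderated?")
pos = issue.attach(Node("position", "Yes, moderate before publishing"))
pos.attach(Node("argument", "Prevents abuse reaching the site"))
pos.attach(Node("argument", "Delays and may silence debate"))
print("\n".join(outline(issue)))
```

Even this toy version shows why a GUI like gIBIS was a natural next step: the structure is a tree of typed nodes that maps directly onto a visual map of the debate.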

In fact, in terms of deliberative content, by 2003, when Beth Noveck wrote at length on the subject, deliberative systems were extremely rare and only a handful (she found seven examples worldwide) of e-participation initiatives existed. She states that “the absence of appropriate technology to transform private conversation into public deliberation is at the root of electronic democracy’s stunted growth”. Deliberation is a “function of a particular type of structured speech”, and in cyberspace the architecture, the code itself, “directly shapes and structures conversation”. To tackle this problem Noveck created Unchat, an online, real-time (synchronous) discussion tool which enabled a global community of invited lawyers to participate in type-written (as opposed to spoken) group conversations. When developing Unchat, Noveck discussed the factors affecting, and required for, deliberation offline and developed them into a set of values to which online systems must adhere in order to foster real deliberation. They are listed below, along with explanations of how the Unchat design implemented them:

  • Accessibility – Noveck discussed technical accessibility but not issues such as the digital divide
  • No censorship – freedom of speech must be protected
  • Autonomous – allow users to configure the system and set their own rules
  • Accountability and relevance – minimise anonymity to create accountability, though a blended anonymity model is suggested
  • Transparency – conversation archives kept and web logs stored for analysis
  • Equal and responsive
  • Pluralistic and inclusiveness – role-based permissions and moderation, set up to reflect the community
  • Appropriate moderation/facilitation – facilitation is “a clear risk to democracy” but can also improve conversation and teach people to deliberate and participate effectively. Facilitation can occur but flexible models should be available (e.g. moderator can be elected/deposed and can give private or public feedback during facilitation)
  • Informed – data summarized and stored, presented to participants, transcripts available to latecomers to “catch up”. Noveck also hints at the possibility of post-debate content analysis
  • Speed bumps – the navigation system forces participants to be exposed to relevant information before entering a debate, encouraging them to read it and become informed rather than heading straight in and talking. Taking this a step further, a quiz was introduced before debate to expose participants to key concepts and arguments, possibly designed to target arguments they have not previously considered.

Of course, there is more to an e-participation system than simply deliberation. Factors such as integration into policy making also influence effectiveness. Ann Macintosh has written a number of articles about the evaluation of e-participation systems, producing a framework for the purpose. In 2004 she described ten key dimensions of e-participation initiatives:

  1. Level of participation
  2. Stage in decision-making
  3. Actors
  4. Technologies used
  5. Rules of engagement (privacy, registration, site rules)
  6. Duration and sustainability
  7. Accessibility (digital divide and WAI)
  8. Resources and promotion
  9. Evaluation and outcomes
  10. Critical factors for success

A lot has been written about the topics included in Noveck’s values and Macintosh’s key dimensions. James Fishkin wrote about his “Deliberative Polling” platform in 2009, addressing accessibility and representativeness, highlighting the difficulties of engaging with representative samples and the effect of pressure groups and lobbyists on e-participation initiatives, as well as the deliberative costs of systems that attempt to solve some of these problems but not others. Noveck, and similarly Fishkin, was convinced of the deliberative superiority of synchronous over asynchronous deliberation, owing to the time commitment required to participate fully in an asynchronous debate. Cavalier, Kim and Zeiss, in their 2009 paper about the PICOLA project, also preferred the synchronous method, claiming that the use of new technologies in carefully designed interfaces could replicate the level of deliberation of face-to-face conversation. Noting the scheduling difficulties of synchronous participation, the group combined this tool with asynchronous discussion areas to create a 24-hour platform for participation. Tucey (2009) described weaknesses in synchronous models, such as the difficulty of expressing an opinion given the speed of conversation, and suggested a hybrid strategy in which highly engaged groups might interact synchronously but with limits on their frequency of posting. Tucey, like Noveck, also advocated breaking large groups into smaller ones (up to 24 people) in order to replicate the deliberative quality of face-to-face conversation. Both Tucey and Fishkin highlighted the importance of repeat interactions between participants – perhaps a requirement to discuss issues weekly for several weeks – to help them get to know each other and understand their contrasting ideas. In fact, asynchronous debate (in the form of bulletin board, messageboard and forum systems) has dominated internet participation.

Ann Macintosh wrote in 2004 that “Typically e-engagement is based on discussion forum technology” and described online communities based upon discussion forums as examples of empowerment, but noted that e-engagement initiatives of this form imply that an indication of the level of agreement with proposals is sought. Tucey (and others before her, such as Wright and Coleman) suggested that moderation can help to increase deliberative conversation in groups but can be impractical at times and should be tailored to the group involved. Scott Wright wrote about “The necessity of moderation” in 2009, citing Kearns et al., Barber and Blumler & Coleman when describing how moderation, and indeed facilitation, can be vital in turning the uncontrolled expression of free speech into more focussed and useful discourse. Wright argued that moderation is justified because the anonymity and physical separation allowed by the internet cause behaviour that requires it. He described two models of moderation (as well as the third model of no moderation): content moderation, in which humans (and possibly automated programs) pre-moderate content against pre-defined criteria, and interactive moderation, in which the moderator acts as a facilitator, giving feedback, supplying resources and directing the conversation in productive ways. Wright also illustrated the problems of poorly designed or implemented moderation strategies and the necessity of distinguishing legitimate from illegitimate moderation, highlighting the problems the latter can bring, particularly for governmental platforms where the issue of censorship may be raised.
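Wright's content-moderation model – checking each submission against pre-defined criteria before publication – can be sketched as a toy filter. The rules below are invented for illustration; real platforms rely on human moderators and far richer criteria:

```python
import re

# Toy pre-moderation: a submission is held for review if it trips any
# pre-defined rule, otherwise it is published immediately. The blocked
# terms and link limit are invented examples, not any real platform's.
BLOCKED_TERMS = {"idiot", "scum"}   # hypothetical abuse word list
MAX_LINKS = 3                        # crude spam heuristic

def pre_moderate(comment: str) -> str:
    words = set(re.findall(r"[a-z']+", comment.lower()))
    if words & BLOCKED_TERMS:
        return "held: abusive language"
    if len(re.findall(r"https?://", comment)) > MAX_LINKS:
        return "held: too many links"
    return "published"

print(pre_moderate("I support the new cycle lane."))
```

The sketch also illustrates Wright's worry: whoever writes the criteria (the blocked-term list here) silently shapes which voices are heard, which is exactly why legitimate and illegitimate moderation must be distinguishable.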

I was particularly struck by the 2007 paper by Schlosberg, Zavestoski and Shulman, which analysed a number of online initiatives, including e-rulemaking, and showed how online systems can be just as deliberative as, though not more deliberative than, offline methods. They described with clarity how internet systems can either constrain or promote deliberation and showed how functionality and interface design can be important factors in determining how deliberative input will be. Scott Wright and John Street also wrote about the importance of design for deliberation in 2007, illustrating how design shapes deliberative quality and can either facilitate or impede it. They stressed that they were not describing technological determinism, partly because the technology was not a guarantee of the use it would generate, but also because the technology used was often the result of political choices made when the system was designed and commissioned. It was these choices, they argued, that influenced the quality of deliberation rather than the technology itself.

Scott Wright’s later work stressed the importance of looking for trends across platforms and not focussing just on new innovations and technology as well as the importance of less “institutional” discussion fora. Many non-political sites host somewhat deliberative content and Scott has stressed that these “third spaces” are potentially an untapped resource of political opinion, or at least a model of engagement that could be used as an example in future work. This led me to think about the real strengths of the internet and where its potential lies. There is much debate about the usefulness of online platforms for deliberative purposes. For sure, small scale deliberation can take place on specialist platforms and deliberation occurs to varying degrees in asynchronous fora. But how do we effectively design for representative engagement and deliberation on a very large scale, harnessing the greatest strength of the internet: the ability to connect and promote interaction between large, national and even global communities? As Scott Wright showed, there are communities out there in cyberspace, willing to discuss politics and even deliberate. How do we engage those people in constructive ways? Beth Noveck has shown clearly with her peer-to-patent platform that online collaboration is possible. Increasingly, numerous web platforms offer commenting services that prove to be wildly popular, if not deliberative. So how do we turn this potential and enthusiasm into integrated, deliberative political participation?

Aside from the problem of recruiting participants and keeping their attention and faith, a possible method for maximising the usefulness of their input may lie in structured argument visualization (AV) interfaces which combine many of the technologies, designs and principles of the systems mentioned here. Ann Macintosh has written a number of papers describing how computer supported collaborative argumentation (CSCA) or computer supported argument visualisation (CSAV) (possibly based on IBIS) could be utilised to provide graphical representations of arguments to enable better deliberation. Simon Buckingham-Shum has written a number of texts describing the problem of knowledge representation and management and the potential of technology to provide platforms for successful visualisation of knowledge and argument in public participation decision making and planning systems. He has been involved in a number of innovative public-participation initiatives including the development of tools such as Compendium, a “hypermedia and sense-making” tool used to structure and represent contents of public planning meetings which can be used to inform web consultations (and vice versa).

    Argument visualisation has shown great potential in a number of policy-making situations, but parts of these models merit closer examination. Presenting information to, challenging and even educating the participant is important, and the visualisation techniques required to help participants make informed contributions are vital. So too, however, is providing a platform on which participants can contribute deliberatively, interact and work towards a consensus. Reciprocal and networked input is essential for deliberation, yet structures allowing this on a very large scale are few, if any exist at all. Furthermore, large-scale contributions need to be analysed and integrated into decision making. The list of values above is applicable to every part of these AV systems, and there is room for research into how interface design, social technologies and content analysis can best be combined to produce effective very large scale deliberative systems.


    Dr Simon Smith: Online research – analysing forums

    The latest PhD seminar featured Simon Smith presenting two pieces of research that he had worked on: a health management project in digital inclusion and an international local e-democracy project. Both projects involved the use and analysis of online forums but differed in the communities of participants involved and the methods employed to facilitate their use. The two projects presented interesting, and contrasting, challenges to research, which Simon presented under two broad themes: ethical issues and the validity of claims about forum content.

    The first example was a piece of action research in which a community of older people with a particular illness were encouraged and facilitated to manage their illness using online tools provided by the researchers. The participants were encouraged, in a very hands-on way, to form online discussion groups: computers and internet connections were provided free of charge, and real-world focus groups initially eased the participants into interacting virtually. The online environment consisted of forums that were open initially to small groups of users and later to the entire group. There was also an instant messenger (IM) service, which the researchers assured participants would not be monitored or analysed. The researchers analysed the content of the forum messages for information such as “self-reported health” (in which they found positive trends as illnesses were more effectively managed, though GP visits initially rose as better-informed patients sought information). A full suite of web analytics was also employed so that usage trends could be monitored alongside the discourse analysis.

    The second example was an ethnographic study built upon a previous EU e-participation strategy, concentrating on local issues discussion forums in the Czech Republic, Slovakia and the UK. These forums consisted of user created threads (not seeded by councils) and the researchers monitored these over the long term to build up a picture of the people contributing and the topics covered using methods such as “qualitative meta-reading” or keyword searches. Issues such as social inclusion, diversity, local or collective identity were investigated, high volume users were identified as well as the level of intermediation (people representing groups). It was found that, although the users did not constitute a representative sample, socially marginal groups did have a voice through people voicing opinion on their behalf. The researchers also investigated how the topics discussed by contributors compared to topics covered in local media, council minutes etc, to identify differences in topics reported and topics discussed by locals. Interestingly, they noted that many of the topics discussed on the forums went on to form discussion topics in council meetings.

    Discussing the ethical issues raised by the two studies, Simon highlighted the uncertain boundary around online social data and the ethical duty implicit in reporting it. The health management study was set up as a medical intervention and operated under informed consent. However, the discussion area was specifically designed for inexperienced users and its final structure was not exactly known at the beginning of the study, so the researchers needed to address concerns by negotiating privacy terms throughout the study rather than relying on the catch-all agreement made at the outset. The researchers also defined boundaries of privacy, signposting private areas, such as the IM service, that were not monitored or analysed. Further ethical issues arose because full web stats (log files containing IP addresses etc.) were also collected for analysis, a step possibly not understood by the participants. Simon explained that the researchers worked on the principle that they should not “exceed reasonable expectations” in terms of personal details, and designed their research accordingly. They even took the step of presenting preliminary drafts of reports to the participants for comment before publishing. The ethnographic study presented slightly different challenges. The participants were discussing public issues on a public forum, but the local focus of the environment made it difficult to assess what people considered to be public and private. People were also able to post anonymously, giving an air of protection against identification. Both studies highlight the importance of considering what is quotable in your research and what is not. It is important to look at how people are using an online environment and developing norms (just as in “real”, offline environments) – for instance, the difference in perception of privacy between IM, with its assumed ephemeral nature, and a publicly archived forum.

    Finally, Simon discussed an interesting view of the ethical use of online data when considering authorship issues and intellectual property rights. Researchers often try to avoid privacy issues by anonymising data, but we need to consider the flip side of this: do we need consent to use a person’s contribution to a forum? Should we be citing their name? The specific environment used must be considered before deciding where ownership of content lies.

    Both of the studies presented findings about the content of the forums and conclusions drawn from it. Simon addressed the validity of conclusions drawn from discourse/content analysis. Technology is sociologically constructed and its use is socially mediated – we don’t harvest any details about the rest of a person’s life, and it is often impossible to contextualise their opinions in terms of their individual situations. The knowledge upon which a contribution is based is not necessarily readily apparent. However, online environments do lend themselves to the collection of sociologically rich data, as people may be less inhibited (perhaps when shielded by the cloak of anonymity) about contributing personal details. Participation is recorded in its entirety, in situ, and contextual knowledge can be included as the participant takes time to formulate a response, adding details and links to illustrate the factors that have helped to build a viewpoint. Falsehoods may be rarer, as they stand as a record to be challenged by large numbers of online participants.

    The validity of conclusions drawn from online discussion is particularly pertinent to my own research. Any e-participation system designed to harvest public opinion must be designed in such a way as to ensure that accurate contributions are solicited and that the collective opinion of the virtual online community compares as closely as possible to its real-world offline alternative. Indeed, online systems can go beyond their offline equivalents to produce a method of soliciting opinion that is free from the factors that can degrade its quality. Structures have been built into a number of e-participation systems for just that purpose. In Beth Noveck’s peer-to-patent system, communities and tasks are organised in granular fashion with clear goals and expectations to ensure that a task is completed in an efficient manner, and a reputation-based system ensures quality comment from trusted individuals. Another system creates space for debate that is strictly structured into chunks of contributions, for or against an argument, which can be voted upon in an attempt to create a consensus of opinion. Innovative designs such as these will be crucial in the development of more effective e-participation solutions, and investigation into them will form a crucial part of my research.


    Supervision meeting three

    After my meeting with Ann, Stephen was aware of my difficulties in projecting a structure of research worthy of a PhD. I talked about the areas I had worked on: case studies; different technologies and ways of looking at e-participation; conversation mapping for analyzing content; argument visualization; interface design and its effect on input. We then went back to basics, discussing what is needed for a PhD and what I wanted to investigate: deliberative content? The efficacy of an initiative? The success of a project (which could be in terms of attracting an audience, as well as other factors)?

    These questions influence the different types of research questions that might be generated: How does interface design affect participation? How does interface design affect deliberative quality? How can we visualize or assess deliberative quality? How can large scale arguments be visualized? These questions should be formulated based upon my interests, as well as a gap in the research.

    Stephen highlighted the potential weakness in this approach: is it actually a sociological question? Do opposing groups deliberate less and resort to “flaming” or other polarized techniques, whereas closer communities deliberate more easily? Are more cognitively challenging subjects more or less likely to create deliberation?

    I described how I wanted to find ways of summarizing input, helping to create useful data from the mass of large scale social input to a participation platform, but described my concerns that the analysis of data using conversation mapping and argument visualisation is “muddied” by the effect of interface design on the quality of data. Stephen remarked that the questions relating to data analysis/visualization are therefore separate from questions about interface design, but postulated that we could link the two. Interface is structurally determinant and has an effect before the conversation starts, whereas visualization happens after the discussion. Inquiry along the lines of “Integration of design and analysis technologies to create successful e-participation initiatives” could be a 2-3 stage process:

    How do you design to allow for deliberation?

    • Examples of participation initiatives
      Including GIS participation, online forums, voting systems, etc.
    • Literature about interface design – HCI, usability, accessibility as well as social science studies of participation

    How do you analyse and visualize conversation?

    • Conversation map
    • Sentiment analysis
    • Semantic web / Web 2.0 methods such as word clouds etc
    • More traditional methods
    • Argument visualization (could form third stage, below)
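Several of these methods can be prototyped from simple building blocks. As a rough sketch of the word-cloud idea (the stop-word list and sample comments below are invented for illustration), term frequencies across a set of contributions can be counted like this:

```python
from collections import Counter
import re

# Word-cloud style summary: count content words across a set of forum
# contributions, skipping common function words. Stop-word list and
# comments are invented examples.
STOP = {"the", "a", "an", "is", "are", "to", "of", "and", "in", "we", "it"}

def term_frequencies(comments):
    counts = Counter()
    for comment in comments:
        for word in re.findall(r"[a-z']+", comment.lower()):
            if word not in STOP:
                counts[word] += 1
    return counts

comments = [
    "The bus service is unreliable",
    "Unreliable bus timetables are the issue",
    "We need a better bus service",
]
print(term_frequencies(comments).most_common(3))
```

The output feeds directly into a word-cloud rendering (font size proportional to count), which is what makes this one of the cheapest ways to summarise large volumes of contributions, even if it says nothing about deliberative quality.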

    How do you create a platform structure for deliberation that allows adequate post-conversation analysis to take place?

    • Integrating the previous two stages
    • Blueprint for successful design
    • Potential prototype

    With this in mind we discussed the following action points to be undertaken:

    1. Produce a literature review of materials relating to interface design and participatory systems
    2. Produce a very rough thesis chapter structure, which Stephen envisioned as having an introduction, 2-3 chapters about interface design, 2-3 chapters about discourse analysis and further chapters integrating the two.

    Back to the drawing board…

    I decided to run some of my thoughts past Professor Ann Macintosh – an expert in argument visualisation and one of the most knowledgeable people in my area of research. Putting to her my thoughts about interface design and its effect on the effectiveness of e-participation platforms, Ann’s immediate reply was to ask “Why is it a PhD and not just something a consultancy could do?”. She was right, of course – my ideas had become a little limited in scope and concentrated a little too much on the practical. Expanding my thoughts, we talked about what I wanted to do: create a method for generating useful information out of the mass of content often contributed online; create a platform to allow users to be fully informed and to develop and illustrate useful ideas; evaluate e-participation strategies to discover areas of strength and weakness that could be used to provide improved services in the future. Talking about analysis of content, Ann stressed that linguistic analysis is very hard and that it has taken years of research to get to the level I am proposing. Acknowledging that I do not have expertise in linguistics, I described some examples where I thought solutions could be created from the “building blocks” of previous research, and Ann agreed that building on others’ work and combining technologies is doable. Building a solution is good, but it needs to be realistic.

    Argument visualisation is one area where there is previous work to build on, and as we talked about ways to investigate the interests I had outlined, such as evaluation frameworks for e-participation initiatives, Ann highlighted it as an area in which I could create a solution to evaluate and enhance previous work. With a research question such as “The appropriateness of argument visualisation in the evaluation of e-participation platforms”, I could create a solution to be tested empirically with focus groups and against previous evaluations.

    Going into the meeting I had a range of ideas and a strategy for research that was becoming troublesomely unsuitable for a PhD. Coming out of the meeting I had a clearer understanding of where things were going wrong, but I needed to have a real think about the steps needed to rectify the situation and get my research plan back on a firm footing.


    Some notes on an idea in the resulting thoughts:

    “Role of technology in enabling and evaluating e-participation”

      • Designing for deliberation
      • Developing systems for argument mapping
      • Analysing content

    Developing tools to analyse and evaluate e-participation technologies:

    Tools to ENABLE deliberation/participation

      • Evaluate interface design
      • Create innovative platform (blueprint/prototype)

    Tools to ANALYSE deliberation

      • Evaluate input to platforms
      • Summarise input to platforms

    Tools to VISUALISE argument/process

      • Enhance tools for argument visualisation
      • Evaluate use in consultation/participation initiative

    Examples for thought:

    “Take one or more initiatives and analyse in a novel way to show value of new method”
    Approach used by Ricky Ohl in his PhD (Knowledge Cartography, 2008, Buckingham-Shum et al., Ch. 3)

    Create an evaluation technique and evaluate different platforms

      • Deliberation assessment
      • Integration assessment
      • Summarising

    Developing ideas…

    Having looked at a range of case studies and read some of the literature I have tried to formulate some ideas about my research, starting with the major themes that must be present in research about e-participation.

    Participation: allowing the public to be involved in decision making and to have their voices heard (and acted upon); helping to educate the public about issues, providing resources and promoting informed opinion.

    Deliberation: social and reciprocal conversation between individuals allowing them to illustrate viewpoints and form opinions; can help groups of individuals to reach a consensus.

    Collaboration: allowing individuals or groups to directly contribute to a process (in this case policy or decision making); helps to build solutions that represent public views; helps to improve government by harnessing the relevant skills and resources of the public.

    Technology: can be used to help create informed opinion (e.g. using argument visualisation); can analyse the level of deliberation in content (e.g. conversation map); can influence deliberation (interface design); can enable collaboration (web 2.0, the semantic web).
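    As a rough illustration of the sentiment-analysis idea mentioned above, a minimal lexicon-based scorer might look like this (the word lists are invented for illustration, and real sentiment analysis is far more sophisticated):

```python
import re

# Toy lexicon-based sentiment scoring: each positive word adds one,
# each negative word subtracts one. The lexicons are invented examples
# only, not drawn from any real sentiment resource.
POSITIVE = {"good", "agree", "support", "helpful"}
NEGATIVE = {"bad", "oppose", "waste", "unfair"}

def sentiment(comment: str) -> int:
    words = re.findall(r"[a-z']+", comment.lower())
    # True/False coerce to 1/0, so this sums +1/-1 per matched word.
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)

print(sentiment("I agree, the plan is good"))
print(sentiment("A bad and unfair waste"))
```

Even a crude score like this hints at how technology can characterise the tone of contributions at scale, while also showing its limits: it captures polarity, not deliberative quality.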

    The thought that weighs most heavily on me when considering the above themes is that of interface design and its influence on the amount of deliberation, participation and collaboration allowed by a platform. How much of the quality, or lack of quality, found in e-participation efforts is down to technological determinism? Of course, many other factors are also involved: the communities attracted; the subject matter of debate; the resources and information supplied in support of a debate; the institution used (government, “third space” or “citizen generated” platform).

    It seems as though, in order to truly analyse the technological influence of platforms on their content, one would have to consider comparable and contrasting example platforms in each of the above categories to get an accurate picture – national/EU government and local government platforms, community groups (geographically localised or vocation/age-specific), citizen spaces and those that Scott Wright calls “third spaces”. Comparable examples in these contrasting categories would need to be analysed for interface and structural characteristics, the deliberative and collaborative nature of their content and their level of integration into policy making, as well as their accessibility and representativeness.


    e-Participation case studies

    Taking an in-depth look at a few more case studies, my initial list of good examples is dwindling fast. Many have been discontinued and the data is not readily available (TalkSwindon and BBC Action Network, for example) and some are woefully underused (such as HighlandLife). Three more examples are worthy of note here though: Ask Bristol, a local government example of consultation and e-participation; the Communities and Local Government Forum, a UK scoped moderated forum; HeadsUp, a UK-wide youth-oriented debating site.

    Bristol City Council has a long record of innovating in public online participation. The website lists their offering as Viewfinder Bristol, but the latest incarnation seems to be AskBristol, a site that allows BCC to converse with the public via a forum-based platform. Debates are held one at a time, with the results of previous debates scrutinised and published on the site. Information in the form of text, data, webcasts etc. is presented for different threads within a debate, with a space for public contributions beneath. Comments are moderated against a set list of fairly standard rules (e.g. “Don’t be too offensive…”), and a ratings system allows users to show support for comments they like and opposition to those they don’t. The most popular comments and ideas are easily viewed via a simple rollover. The content is not particularly deliberative and there are no explicit social features in the interface, ratings aside, but there does seem to be a level of integration of content into policy and a good level of feedback. Collated reports are available in PDF format showing responses to each “idea” submitted and summaries of public responses, as well as details of how the ideas were integrated into policy. There are also links on each page to other methods of communication, such as email, surveys or details of offline public consultation. It’s also interesting to see that there is a facility to comment on how the council could engage better with the public online!

    The Communities and Local Government forum is an interesting example of a simple platform because it is heavily moderated – nothing is published without prior approval – and moderation occurs only in office hours. This has the effect of limiting active participation, other than individual comments, to those times. The system is not particularly interactive, but it does allow input to be structured as a reply to previous input and allows discussion threads to be “tagged” by users to help categorise the conversations. The content is surprisingly deliberative for such a strictly controlled environment. It would be interesting to analyse the ethno/demo/sociographic makeup of the community of users to see what effect the barriers to use (moderation and “opening hours”) have on participation. It would also be interesting to find out the effect of the moderation itself on debate: it can only be assumed that moderation silences certain voices, but the extent to which this improves or degrades the debate is unknown.
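    The pre-moderation model described above – submissions are always possible, but nothing appears until a moderator approves it, and approval happens only in office hours – amounts to a simple gated queue. A hypothetical sketch (the forum’s real workflow and hours are not documented here; the 9am–5pm window is my assumption):

    ```python
    from datetime import datetime

    OFFICE_HOURS = range(9, 17)  # assumed 9am-5pm window, not the forum's stated hours

    class ModeratedForum:
        """Pre-moderated forum: posts queue up and are only published on approval."""

        def __init__(self):
            self.pending = []    # submitted but not yet reviewed
            self.published = []  # visible to the public

        def submit(self, post: str) -> None:
            # Submission is always possible; publication is not.
            self.pending.append(post)

        def approve_next(self, now: datetime) -> bool:
            # Moderation only happens during office hours, so the queue
            # simply sits untouched outside that window.
            if now.hour not in OFFICE_HOURS or not self.pending:
                return False
            self.published.append(self.pending.pop(0))
            return True
    ```

    The sketch makes the barrier to participation concrete: the rate at which debate can unfold is bounded by the moderators’ availability, not by the participants’ enthusiasm.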

    HeadsUp is an interesting site for young people that contains a lot of information about how to debate, as well as why, and how, to contribute to debate about society. The site employs a simple messageboard platform to debate one theme (with sub-categories) at a time. Other than customisable avatars, the system does not provide explicit social features (such as reciprocity), but it does have the novel feature of a board of “heads” – facilitators and moderators who have a presence on the messageboard of each discussion to help the debate move along. The debates seem slow to build, but I need to investigate further, as the current debate is new and closed debates do not seem to be readily available for scrutiny, though in-depth reports do provide a good summary of content.


    e-Participation initiatives uncovered

    My initial investigations into potential case studies and examples of e-participation initiatives have proved very fruitful indeed. I have been ably assisted in my trawl of the hundreds of examples currently available by a useful database of e-participation initiatives in the UK, Germany and the EU. Aiming to “show diverse developments and highlight examples of good practice” as well as to “encourage people to join one of these projects or to start their own“, this site is the perfect launching pad for my work. The site allows project teams to upload details of their own e-participation initiatives and allows users to tag and rate the projects. I have been able to look at a number of projects so far, some of which show great potential. There are locally targeted projects, such as TalkSwindon and HighlandLife, as well as UK national initiatives like the BBC Action Network and EU-level forums. There are initiatives targeted at specific communities, too, such as LondonTeens, the Northern Ireland Youth Forum and HeadsUp. All of these examples, and many more, warrant close inspection, a process in which I am currently engaged. I am hopeful that a range of case studies will prove suitable for the kind of research I have in mind. I am looking at the structure of the content, as allowed by the interface of the website used to present the initiative, as well as the topic of conversation and community behaviour.

    Another interesting example that I stumbled across is the UK Police initiative to harvest ideas about cost cutting and efficiency savings. Aimed at internal personnel but available to all via the internet, the site has a similar structure to recent government initiatives such as the SpendingChallenge and YourFreedom websites, in that it allowed participants to publish ideas and comment on the ideas published by others. Like those initiatives, the system suffered from the submission of spam postings as well as inflammatory, and possibly deliberately overstated, comments, and at first glance a lack of deliberation, as comments seemed to be individual statements of opinion rather than interactions of discourse. Unlike the other initiatives, the police review seeded conversations with initial questions and did not allow voting on comments. The example could prove useful if comparable systems are identified, as it was aimed at a particular user group which may behave differently from the general public when participating.

    Following on from the thoughts I had about Scott Wright’s research on “Third Spaces“, I examined a number of such spaces for their comment and interaction facilities. One particularly interesting example was a site where it looked as though the facilities for discussing products had been adapted to become general discussion boards, categorised into a number of different themes, including a very well used politics section. The interface was designed to allow interaction through reply facilities, and the social “path” of a conversation was recorded through the username, the date of posting and the previous message responded to. There was also a facility to mark whether a post “adds to the discussion” – a measure of the usefulness of contributions and, in aggregate, of the quality of the debate. These marks are also recorded against a user profile, raising the potential of a profile “respect” model. On the whole, the parts of the forum that I looked at seemed remarkably deliberative, and discussions existed about particularly topical themes (for instance, “Will a degree bring graduates enough income increase to pay for the degree?”). It will be interesting to see whether these “third spaces” can provide successful models for deliberative interface design, or whether other factors are involved in fostering deliberation, such as the communities drawn to different platforms.
