I am continuing my exploration of common standards and have been looking at those that can be applied to the provision of services online. Here are some notes.
As part of its “digital by default” agenda the UK Government has defined a standard against which the development of government services provided online should be judged. The Digital Service Standard is a set of 18 criteria to help government create and run good digital services. These range from the essential “understand user needs”, through a commitment to using agile development methods and an encouragement to use open standards and open source, to the government-specific “test the service from beginning to end with the minister responsible for it”. The details of the standard can be found online as part of the Government’s Service Design Manual here:
Using the UK Government’s Digital Service Standard as a starting point, the LocalGov Digital group have created a similar standard designed to be applied to UK local government services. The details of this standard can be found here:
Obviously, many of the principles that these standards seek to articulate can be applied to the design of online services and applications beyond the government sector. Large organisations can seek to embed such standards within their governance structures and processes. But at a more basic level I think the standards are a useful checklist against which to test and assess services as they are being developed. If your work tends to be about finding technical solutions to specific problems, attempting to apply such standards is a good way of forcing you to step back and see the bigger picture.
It is stating the obvious to say that data is about things. But I often find that I have to ask myself: precisely which things?
The creation of data models is an integral part of software engineering. They are used, particularly in database design, to understand and map out the structure and characteristics of the data you are working with. Traditionally there are three types of data models that can be applied to a given system; each building on the previous one and increasing in complexity. These are:
the conceptual data model,
the logical data model,
the physical data model.
The conceptual data model is the highest-level description of the data used in a given system and is the one most closely concerned with matching the data to the actual things that the system is concerned with. This data model will identify concepts and the relationships between them. For example, in a business accounting system the model may identify that the customer (the first concept) has one or more (the relationship) invoices (the second concept).
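The customer/invoice example above can be sketched in code. This is a minimal illustration of the conceptual level only, not tied to any particular database; the field names beyond the two concepts and their “has one or more” relationship are invented for the sketch:

```python
from dataclasses import dataclass, field

# Conceptual model sketch: a Customer (first concept) has one or more
# (the relationship) Invoices (second concept). A logical or physical
# model would add keys, types and storage detail on top of this.

@dataclass
class Invoice:
    number: str
    amount: float

@dataclass
class Customer:
    name: str
    # the one-to-many relationship between the two concepts
    invoices: list[Invoice] = field(default_factory=list)

acme = Customer(name="Acme Ltd")
acme.invoices.append(Invoice(number="INV-001", amount=250.0))
print(len(acme.invoices))  # 1
```

The point of the exercise is that the classes name the concepts and the relationship before any implementation decisions are made.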
When doing WordPress development I am often attempting to develop a conceptual data model of the needs of a particular website so that I can determine how many custom post types I need to create. In my current work with open data I find that I need to clearly conceptualise the actual thing that any new set of data is about in order to create an appropriate schema for it. I think I have got pretty good at this type of modelling and have become convinced of its benefits as a problem-solving tool beyond the realm of building databases.
I have also become aware that because the things that software and web applications are working with are often the same or similar — people, events, transactions, content items and so on — this opens up the opportunity for people and organisations to work with shared conceptual models and even for the development of common standards.
There are three aspects of this that interest me and that I would like to explore a bit further:
Obviously shared conceptual data models help to underpin the development of open data. If you want to create data schemas that enable the widest possible sharing of data then it helps if they are describing data that relates to clear commonly understood concepts.
Thinking about common conceptual data models should also help me to improve my web development work. Obviously, when trying to conform to web standards and best practice I am already, at a number of levels of abstraction, working to widely shared conceptual data models (the semantics of HTML5, for example). But I think I can take this further. Given that I am regularly making use of similar post types and page elements across different projects, I should be able to create a standardised conceptual model for Grit & Oyster and use that as a framework to help me write better and more efficient code.
Software engineering primarily thinks about conceptual data models as they apply to individual systems. Common standards, such as web standards, require conceptual data models that are global or generic. But there is also an opportunity for standardisation of conceptual data models at the level of the organisation, which could be hugely beneficial.
I have come across a number of initiatives that attempt to develop standard models or are in part dependent upon them. These I will document below (and add to when I find new ones):
Schema.org
The schema.org project to develop vocabularies for structured data on the internet has, because of the broadness of its objective, created an implicit conceptual data model (or, more accurately, a number of interlinked models) which I think acts as a good starting point for developing models for web applications.
You can see the full schema.org hierarchy of types (of things) on their website. At the top of the hierarchy sits the generic root type Thing, with broad types such as Person, Place, Organization, Event, Product and CreativeWork immediately beneath it.
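As a concrete illustration of the kind of conceptual model schema.org encodes, here is a small, entirely hypothetical JSON-LD fragment describing an organisation and one of its members using two of schema.org’s core types, Organization and Person (the names are invented):

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Council",
  "member": {
    "@type": "Person",
    "name": "Jane Smith"
  }
}
```

The types name the concepts and the `member` property names the relationship between them, which is exactly what a conceptual data model does.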
Smart City Concept Model
As part of the standards strategy for smart cities in the UK promoted by the Department for Business, Innovation and Skills (BIS), the BSI (the British Standards Institution) has developed the Smart City Concept Model. This standard (PAS 182) defines a series of concepts that describe the things that are typically contained in city data.
“The model is relevant where many ORGANISATIONs provide SERVICEs to many COMMUNITYs within a PLACE.”
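The core relationship in that sentence can be sketched as a tiny data model. The class names simply mirror the capitalised concepts in the quote; the example names and fields are my own invention, not part of PAS 182 itself:

```python
from dataclasses import dataclass, field

@dataclass
class Place:
    name: str

@dataclass
class Community:
    name: str
    place: Place  # a COMMUNITY exists within a PLACE

@dataclass
class Service:
    name: str
    provided_to: list[Community] = field(default_factory=list)

@dataclass
class Organisation:
    name: str
    # ORGANISATIONs provide SERVICEs
    provides: list[Service] = field(default_factory=list)

town = Place(name="Exampleton")
residents = Community(name="Residents", place=town)
waste = Service(name="Waste collection", provided_to=[residents])
council = Organisation(name="Exampleton Council", provides=[waste])
```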
Local Government Business Model
The Local Government Association (LGA) in the UK has developed the Local Government Business Model (LGBM). This is an attempt to define the elements of public sector service delivery provided by local government.
The model is fleshed out with a number of standardised and interlinked lists that can be found on the esd standards website.
The Open Data Institute (ODI) together with the Local Government Association (LGA) has developed three data publishing and data standards learning modules for Local Government.
These online courses include information to support local authorities in publishing data, improving the quality of that data, and using common standards so that data can more easily be shared, combined and compared for further use and analysis across authorities. They include case studies from councils demonstrating the benefits of data publishing and the use of standards.
On Monday 25th April 2016 I attended a Local Government Association (LGA) event, “Making Data Standards Work”, which through a series of presentations explored the role that common standards for data can have in improving the performance, effectiveness and accountability of local government.
Given I have a background in web development, an activity that wouldn’t exist without technology that conforms to standards, it should be obvious that I do not need convincing of the importance of working to common standards. Yet my recent work with open data for a local authority has reinforced this view and given me a greater understanding of how a standards model can be applied to a range of other activities and sectors. It has also deepened my commitment to the importance of open standards.
So I was interested to hear at the event about how standards are playing a role in the wider agenda of local government. From the various presentations I was able to pick up a number of specific ideas and to get a useful broad view of the different bodies and organisations involved in this area of work. I think the main insight that I came away with from this event (apart from the general level of geekiness that such a topic generates) was how significant a role a standards-based approach can play in service transformation. There were a couple of really neat examples of how this was happening in practice, but it was also obvious that UK local government is only just beginning to recognise the power that such an approach can have.
Here are links to two services I’ve begun using to help understand and manage the followers associated with my Twitter accounts. They help you to analyse both who is following you and the accounts that you are following. This means you can then do various housekeeping tasks such as making sure you are following back those following you or unfollowing inactive accounts.
This is a simple service that divides your Twitter connections into three types: ‘following’, those you follow who don’t follow you back; ‘fans’, those who follow you that you don’t follow back; and ‘friends’, those with whom you have a mutual connection. It also works with Instagram and Tumblr.
This is the more advanced service; it breaks your Twitter connections down using several different measurements. For example, it looks at the ratio of followers to followed for each connection, or can tell you which are the most ‘talkative’ accounts. It can also identify fake or spam accounts. I’ve found the list of inactive accounts it provides particularly useful.
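The three-way split that the first service makes is really just set arithmetic on two sets of accounts. A quick sketch, with invented account names:

```python
# Two sets: accounts you follow, and accounts that follow you.
following_ids = {"alice", "bob", "carol"}
follower_ids = {"bob", "carol", "dave"}

following_only = following_ids - follower_ids  # you follow, not followed back
fans = follower_ids - following_ids            # they follow, you don't follow back
friends = following_ids & follower_ids         # mutual connections

print(sorted(following_only))  # ['alice']
print(sorted(fans))            # ['dave']
print(sorted(friends))         # ['bob', 'carol']
```

The real services presumably pull these two sets from the Twitter API; the categorisation itself is this simple.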
Both services are free but have premium options for power users.
Since the beginning of 2015 I have been doing some work for a London borough helping them develop their approach to open data.
As part of a wider agenda to make local government more open and transparent councils are being encouraged to publish some of the non-personal information they gather and use as open data. This can be something of a challenge both in terms of the practical implementation of the mechanisms needed to do this and the cultural change needed to see such an activity as valuable.
I’ve been working on the practical side of things: helping to develop systems and processes that can be used to meet this particular organisation’s ambitions for open data, but ones that also work within the constraints of the time and resources available.
It has been interesting work and I’ve begun to develop a real feel for the wider open data agenda, as well as seeing where many of the issues and frustrations of such a new field are arising. It is a field that I think I can contribute to, and so I am planning to develop further expertise in open data. Obviously, my starting point is to approach it from a local government perspective, but I am already getting interested in some of the wider issues. Naturally I am also starting to look at how my WordPress development skills can be applied to this topic.
As I delve further into open data I will be writing up notes and discoveries here. Follow the open data category to find my posts on this subject.
I’ve been working on a project where I wanted to include a template file to handle the display of a particular WordPress custom post type. However, I wanted this template to be included in the plugin that created the custom post type and not in the theme. The plugin adds the post type to one site, and one site only, within a multisite network, so I didn’t want to clutter up the folder of the theme which is used on other sites that don’t require the plugin.
So I looked for a way to do this and discovered the role of template loaders within plugins. Including a template loader in your plugin allows you to associate a template file in your plugin’s folder with a filter hook or a shortcode. But the great advantage is that it replicates the behaviour of the get_template_part() function, which means you can override the default plugin template file with a custom file in a child or parent theme. (Obviously I didn’t need this in this case, but it is an example of the good practice of ensuring that plugins and themes are not dependent on each other.)
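The override behaviour described above boils down to a simple lookup order. WordPress template loaders are written in PHP, so the Python sketch below only illustrates the search logic, not the actual WordPress API: a template file in the child theme wins, then the parent theme, and only if neither overrides it does the plugin’s bundled default get used. The directory parameters and the template name in the usage example are invented for illustration.

```python
import os

def locate_template(template_name, child_theme_dir, parent_theme_dir, plugin_templates_dir):
    """Return the first matching template file, checking the child theme,
    then the parent theme, then the plugin's own templates folder."""
    for directory in (child_theme_dir, parent_theme_dir, plugin_templates_dir):
        candidate = os.path.join(directory, template_name)
        if os.path.isfile(candidate):
            return candidate
    return None  # template not found anywhere
```

Called with something like `locate_template("single-book.php", child, parent, plugin)`, the plugin’s copy is used until a theme supplies its own file, at which point the theme copy silently takes over.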
Last month saw the announcement of the closure of another online service that I have been using. This time it is Readmill, the ebook service that consisted of a reading app for iOS and Android devices and a social network for sharing the reading experience.
I am not as upset about Readmill’s closure as I was about Editorially’s, as I was only really using part of the service. I wasn’t particularly interested in the sharing and community aspects. The reason I signed up was that I wanted an app that allowed me to read and organise my ebooks, that provided a good user experience, and that was an alternative to Apple’s iBooks app and the Kindle/Amazon service. This I thought Readmill did well.
The ebook marketplace is currently not as open as it should be, with two dominant players and a confusion of proprietary formats and DRM implementations. I am concerned that Readmill’s demise will not have helped this. I also now need to find an alternative solution.
Our team will be joining Dropbox, where our expertise in reading, collaboration and syncing across devices finds a fitting home. Millions of people use Dropbox to store and share their digital lives, and we believe it’s a strong foundation on which to build the future of reading. We’re delighted to work alongside this talented team and imagine new ways to read together.
Dropbox is a sustainable business and has considerable clout. If, as this suggests, it is looking to do more to develop features for ebook readers then I would welcome this. It will be interesting to see if anything comes of it.
We’re proud of the team and tool that we built together and incredibly thankful that so many of you were willing to give it a try. And we continue to believe that evolving the way we collaborate as writers and editors is important work. But Editorially has failed to attract enough users to be sustainable, and we cannot honestly say we have reason to expect that to change.
I haven’t been a particularly heavy user of Editorially, but my use of it had been growing and it was gradually becoming an important part of my workflow for some tasks. I’d been planning to try and make more use of its collaborative possibilities in the future. So its closure on May 30th is a real disappointment.
I thought it was a really good example of an elegant user interface and was a pleasure to use. So it is distressing that such a well designed tool has failed to become sustainable.