Wednesday, March 4, 2009

Some Thoughts on Healthcare Web Strategy

What is a good technology strategy, and how does it relate to a good business strategy? How does one determine which technology initiatives deliver the most bang for the buck? Everyone knows that the proper application of IT can save money by automating the most labor-intensive processes and streamlining inefficiencies, and that it can improve services and make money by attracting and retaining more customers. To fit our organization’s more altruistic goals, it can also improve medical care, for example by making the referral process faster, and it can offer patient services that enhance overall care.

What follows are some examples of web technologies and how they can be of use. This is not a prescription for what we “ought” to do, but is meant to illustrate some of the issues and factors to consider when developing a web strategy.

The New Yellow Pages

What doctors specialize in my child’s condition and practice within 20 miles of my home? Now only show me the ones with an available appointment time this week. This query is based on a search technology less than ten years old but growing rapidly in use. It works by matching graph patterns, which can be thought of as traversing the relationships from condition to specialty, specialty to doctor, doctor to clinic, and doctor to schedule. This does not replace relational databases or traditional keyword-based web search, but it does provide a new and very useful way to query data. Some data, such as schedules, location, and contact information, can be retrieved from existing systems; some will need to be created and maintained. For example, how does a doctor tell the system that he practices at a certain clinic?

Using the URL-as-URI data model (Berners-Lee, 1999), he simply logs in, opens a web browser to the clinic home page, and clicks, “I practice here.”
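The traversal described above can be sketched with a toy in-memory triple store. This is only a sketch of the idea; a real deployment would use an RDF store and a query language such as SPARQL, and every name below is invented for illustration.

```python
# Toy triple store: (subject, predicate, object) facts, invented for illustration.
TRIPLES = [
    ("dr_lee",     "specializesIn", "cardiology"),
    ("dr_lee",     "practicesAt",   "clinic_main"),
    ("dr_patel",   "specializesIn", "oncology"),
    ("dr_patel",   "practicesAt",   "clinic_east"),
    ("arrhythmia", "treatedBy",     "cardiology"),
]

def objects(subject, predicate, triples=TRIPLES):
    """All objects o such that (subject, predicate, o) is in the graph."""
    return [o for s, p, o in triples if s == subject and p == predicate]

def subjects(predicate, obj, triples=TRIPLES):
    """All subjects s such that (s, predicate, obj) is in the graph."""
    return [s for s, p, o in triples if p == predicate and o == obj]

def doctors_for_condition(condition):
    """Traverse condition -> specialty -> doctor -> clinic."""
    results = []
    for specialty in objects(condition, "treatedBy"):
        for doctor in subjects("specializesIn", specialty):
            for clinic in objects(doctor, "practicesAt"):
                results.append((doctor, clinic))
    return results

print(doctors_for_condition("arrhythmia"))  # [('dr_lee', 'clinic_main')]
```

Note that the “I practice here” click simply adds one more triple to the graph, after which the doctor appears in every query that traverses that relationship.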

Successful IT initiatives are evolutionary, not revolutionary, and what has been described so far is something that can be built on top of software currently in use at LPCH. It utilizes existing web content, data stores, and applications, and adds some new tools developed by HP Labs and Stanford Bioinformatics. Rather than wholesale system development, it would be constructed by adding a sequence of small feature enhancements to the existing physician search (Java version). For example, first the links between doctors and clinics (and specialties) provide a two-way reference and additional web content for each, and vCard support can provide the inputs for distance calculations and map APIs. Each of these is a useful feature in and of itself. If later combined with the vCal API to Microsoft Outlook or Cerner scheduling, one could find matches within a set of preferred appointment times. This is a natural growth of, and not a replacement for, technology that already works.
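The appointment-matching step at the end could be as simple as intersecting a family’s preferred windows with a doctor’s open slots. A minimal sketch, assuming the open slots have already been pulled from a calendar feed; the function name and all times are illustrative:

```python
from datetime import datetime

def matching_slots(open_slots, preferred_windows):
    """Return the open slots that fall inside any preferred window.

    open_slots: list of datetimes pulled from a scheduling system.
    preferred_windows: list of (start, end) datetime pairs, end exclusive.
    """
    return [slot for slot in open_slots
            if any(start <= slot < end for start, end in preferred_windows)]

open_slots = [datetime(2009, 3, 9, 9, 0), datetime(2009, 3, 9, 15, 30)]
preferred = [(datetime(2009, 3, 9, 14, 0), datetime(2009, 3, 9, 17, 0))]
print(matching_slots(open_slots, preferred))  # only the 15:30 slot matches
```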

As a part of a larger enterprise directory, this can be combined with a variant of the FEA-RM enterprise architecture ontology (Allemang, et al, 2004), which uses a set of polymorphic nested containers, more or less independent sub-trees inheriting one of two transitive properties: comprises and comprisedOf. If A contains B, and B contains C, then A contains C, and C is part of both B and A. With those rules available, all a manager should have to do is add a contractor or new employee to her group, and this person is automatically also a member of the correct cost center, department, and organization, instantly appearing in all appropriate directory listings.
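The transitive containment rule can be sketched in a few lines. In an ontology the inference engine does this for you; the sketch below just walks a hypothetical org chart (all names invented) to show why adding a person to one group is enough:

```python
def members_of(container, contains):
    """Transitively collect everything inside `container`.

    `contains` maps each container to its direct children, mirroring the
    transitive `comprises` property: if A contains B and B contains C,
    then A contains C.
    """
    seen = set()
    stack = [container]
    while stack:
        node = stack.pop()
        for child in contains.get(node, []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

org = {
    "hospital":    ["department"],
    "department":  ["cost_center"],
    "cost_center": ["group"],
    "group":       ["new_employee"],
}

# Adding the employee to her group makes her a member of every ancestor.
print("new_employee" in members_of("hospital", org))  # True
```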

Information Transformation (the perspective you need)

A doctor, a nurse, and a patient all look at the same set of data, but not in the same way.

The broad goal of IT in any organization is to get the right information to the right people at the right time, and this role is even more critical in the medical field. Therefore we must closely examine the transformative properties of web technology.

The web is ultimately about the serialization and de-serialization of data. That was its original design intent, and that is the essence of its power today. Serialization is defined as the transformation of data from within an IT system (a database or application, for example) to a transmittable document (say, a web page or XML message). With this serialization and de-serialization comes transformation. There are many transforms and many data markup standards on the web today; so many, in fact, that in web systems design, build-by-restriction is as important as build-by-feature (Fielding, 2000). There are many possible serializations of a given data set, and engineers choose the one they think is best for the purpose at hand. The key to good design is selecting the most efficient transform between your data set and each of its intended uses.
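As a minimal illustration of one data set admitting multiple serializations, here is a single hypothetical record serialized two ways, JSON for a browser-facing UI and XML for a system-to-system message. The record fields are invented for illustration and bear no relation to any real EMR schema:

```python
import json
import xml.etree.ElementTree as ET

# One internal record; two serializations chosen for two different consumers.
record = {"patient_id": "12345", "name": "A. Example", "allergies": ["penicillin"]}

def to_json(rec):
    """Serialize for a browser-based UI."""
    return json.dumps(rec)

def to_xml(rec):
    """Serialize for a system-to-system message."""
    root = ET.Element("patient", id=rec["patient_id"])
    ET.SubElement(root, "name").text = rec["name"]
    for allergy in rec["allergies"]:
        ET.SubElement(root, "allergy").text = allergy
    return ET.tostring(root, encoding="unicode")
```

Same data, two documents; the design work is in deciding which transform serves which audience.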

An effective serialization must be designed in the context of those two data structures. In the case of patient data at LPCH, the source data structure is defined, with the Cerner EMR being the “source of truth.” Likewise, one serialization target, namely physicians, is also defined as it appears in the Cerner physician-facing UI. MDPortal, being targeted at physicians, mirrors this closely, with the patient census list being the only notable departure from the source data structure. Another serialization target, patients and families, is not so well-defined. Many PHR systems mirror the traditional data stores of healthcare, which professionals use deftly but which become unwieldy in the hands of the average person. The ideal patient/family care workflow, and the most effective data structure transformations to facilitate that workflow, is not a well-solved problem industry-wide. The first team to solve it will gain a sizeable market advantage and remove one of the last barriers to widespread PHR adoption.


Much has been made of the advent of the social network, and it is indeed a significant development. So much so that it becomes important to look at other times in history when new technology has facilitated, changed, and reshaped the nature of communication. The core principles of the past were not in fact overturned, and those who succeeded did so by adapting those principles to the new environment. New media also tend to adapt themselves to the existing world, as with the size of a standard hardcover book or the shape of the iPod.

This time, the new tool is the relationship graph, which differentiates itself from group membership as well as traditional broadcast communication. Where the traditional media broadcast goes to as wide an audience as possible without regard to relationship or membership (though possibly with regard to location), the social network sends communications to friends and friends-of-friends, regardless of location or other factors. Relationships can be friends, former patients, donors, employees, or anyone on an opt-in basis, but to a computer they are simply connections. Social networks store these relationships as graphs, and communications travel outward along these connections, usually no more than one or two Bacon Units (BU) away, a BU being one degree of separation between people (Milgram, 1967).
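That outward propagation can be sketched as a bounded breadth-first traversal of the relationship graph. The graph below is invented for illustration; the hop limit plays the role of the one-or-two-BU radius described above:

```python
from collections import deque

def within_hops(graph, source, max_hops=2):
    """Everyone reachable from `source` in at most `max_hops` connections."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if dist[node] == max_hops:
            continue  # do not expand past the hop limit
        for friend in graph.get(node, []):
            if friend not in dist:
                dist[friend] = dist[node] + 1
                queue.append(friend)
    return {n for n in dist if n != source}

# Invented example: a message from the hospital reaches alice and bob
# directly, and carol as a friend-of-a-friend; dave is three hops out.
graph = {
    "hospital": ["alice", "bob"],
    "alice":    ["carol"],
    "carol":    ["dave"],
}
print(sorted(within_hops(graph, "hospital")))  # ['alice', 'bob', 'carol']
```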

The most effective communication strategy will use all the available tools in concert.

It is also important for a healthcare organization to remember what information is under its control, and what is not. The typical Web 2.0 business model, and this includes social networks, is to collect user information, most commonly though not always for advertising. What happens on our servers is under our control, and the information can be archived for auditing, or it can be destroyed for confidentiality. User activity that happens on hospital servers (or those of business partners) can be kept private; activity “in the wild” cannot. This includes conversations, group memberships, activities, applications, and click-throughs. While social networking is shaping up to be a powerful communication tool, I keep to a standard rule of thumb: Don’t say anything on Facebook you wouldn’t say in a crowded room with the advertising industry’s brightest young analysts standing next to you taking notes. Also they have everyone’s email address.


A lot of documentation is (or should be) created during the lifecycle of an IT project. Even the most detailed conversation about an architectural software model is no substitute for a simple UML diagram. From the first inception phases, everything must be documented, and written for an intended audience who might not have you there to explain it. Upon project completion, this documentation should tell a story whose title is “How We Built This Thing (So You Can Too),” and from which operations manuals can be distilled. Along with the deliverable working system, a complete development and testing environment must be provided. The artifacts produced in a normal project lifecycle should lead naturally to these results.

“A problem well-stated is a problem half-solved.” Everyone understands the importance of good requirements. However, there is more than one way to present a given set of requirements, and the most effective way to state them often depends on the technology used as well as the business need. An iterative process, starting with a first-order approximation of business needs and technology capabilities and followed by refinement and convergence, has proven successful in the past and will continue to be.
