Friday, December 4, 2009

Upgrading my IDE

Ah, upgrade time. It's always interesting to see exactly which plugins and libraries I have installed, and then to make sure they're still installed afterwards.

Eclipse plugins:
  • Scala
  • Groovy
  • Pydev
  • OxygenXML
  • GWT/GAE
  • Mylyn
  • Subversion
Libraries:
  • Jena
  • Aperture
  • GData
  • Google Collections
Not bad, a nice little mix I think.

Saturday, September 5, 2009

I Was a Teenage Heavy Metal Nerd


The first song I ever programmed a computer to play was Black Sabbath's Iron Man. It was on a TI-99/4A, and no big trick, given a reference table of musical notes to frequencies, the simple sound commands (and an even simpler melody). Sometimes when I sit down to a big/daunting/exciting/fun software project I'll start by playing Iron Man on iTunes. Since Apple introduced the Genius button, I can build a custom playlist starting with Iron Man and continuing with a pseudo-random selection of my favorite old school metal bands.
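That note-to-frequency reference boils down to one formula: each semitone up multiplies the frequency by the twelfth root of 2. The TI BASIC original is long gone, so here is a rough reconstruction in Java; the note numbers follow the MIDI convention (A above middle C = 69 = 440 Hz), and the riff is from memory, so take both as approximations.

public class NoteFreq {
    // Twelve-tone equal temperament: f(n) = 440 * 2^((n - 69) / 12)
    static double frequency(int midiNote) {
        return 440.0 * Math.pow(2.0, (midiNote - 69) / 12.0);
    }
    public static void main(String[] args) {
        // Opening of the Iron Man riff, as best I recall: B, D, D, E, E...
        int[] riff = { 47, 50, 50, 52, 52 };
        for (int note : riff) {
            System.out.printf("note %d -> %.1f Hz%n", note, frequency(note));
        }
    }
}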

One summer in college I had an internship at the National Radio Astronomy Observatory in Green Bank, WV. As a side project I was helping put together a computer readout system for one of NRAO's oldest radio telescopes. To make sure the A/D card was ready to pick up analog signals, I tested it by plugging in a walkman with a Type O Negative CD. That 40' telescope is now part of the National Youth Science Foundation summer program.

Heavy metal nerds, rock on.

Sunday, August 23, 2009

More and Better Links

Plugging In

I am at the moment marveling at an order of magnitude improvement in information flow. Not only volume, but also the tools to filter, route, search, organize, and share.

So here are a few links of interest coming through the feeds lately.

TWEET IDEAS: 13 Things to Do on Twitter Besides Tweet


Personal Health Records - Who are the key PHR providers and how are they handling lab results?

Soon we will all be e-Patients

Me, I'm working at one of those HIEs beginning to link patients directly to their own health data.

While the government attempts to pave the way for a national health information network.

So we can all have our own Personal Health Records (PHR).

A view from inside the HealthVault — Reviving The Health Revolution
The whole world is interested in this:

From Twitter, I got a report from iHealthBeat on the use of Twitter in Healthcare.

Also, a discussion of social media in healthcare.

And that's just the past few weeks. How about a medical wiki like Medpedia?

Speaking of wikis, try the Knowledge Mobilization Works Wiki / Wiki101


Not to mention the reference manuals - I'm a JEE/SOA type guy, so...

That's enough for now. I expect we will all get better at managing information, channeling it from many sources into the most useful format we can. As the tools improve, so does my skill at using them.

Sunday, August 16, 2009

A Funny Thing Happened on Facebook

It looks like people are busy recreating their old haunts online, and the whole gang is showing up.

I've been finding pages like the Dead Cafe Poet's Society dedicated to the gone-but-not-forgotten Las Vegas coffee house scene (yes such a thing did exist), and a group called Las Vegas Alternative Scene: the 1990s which is really just a collection of people who used to hang out in places like the Enigma, Cafe Copioh, Double Down Saloon, and the KUNV DJ booth. I'm looking through photos, finding old familiar faces, and clicking on them. I am literally getting in touch with people through a picture.

High School? Yeah, we got 'em. Ever re-connect with people by clicking on a 4th grade class photo? In true social network style, you connect with one old friend, look through their friends and photos, and pretty soon the gang's all here.

Before you know it, this will seem normal. Staying in touch will be the default, and all our friendships will last a lifetime. If it's not Facebook, it will be something else, but the technology is proven to work.

Tuesday, August 11, 2009

What a PHR Should Never Do

What would you do if you discovered that a company you had never heard of had access to your medical records? Well, if you were at the Googleplex, up all night coding, you might first tweet furiously and call everyone on the Google Health team you could think of, saying WTF. Second, you would listen to the voice at the next laptop over saying DON'T PANIC, because there had to be a good explanation.

And there is, sort of.

Turns out, the system wasn't hacked and nothing was stolen, but this is still a big issue. Under HIPAA, you must explicitly grant access to your records, for a limited amount of time - you must "opt in" to every application. But there's this one application that operates on an "opt out" basis. So you install an iPhone app, sign in to your GHealth account, and voila! Your records are linked to an affiliated web application you never installed.

Bad, bad, bad.

This is exactly the fear expressed when PHRs first came on the scene, and it didn't help when Google added to its sales pitch that, by the way, since they're not a healthcare organization they're not really bound by HIPAA.

Technically this is not a violation by GH, but they could make some simple changes to force explicit permission for any account linking. Allowing this implicit linking is just not acceptable. The good news is they are working on a fix.

I have plenty of complaints about GH but... BUT... they are the ONLY PHR with a public API to encourage and foster innovation, and I'm grateful for that. Everyone working on PHR applications in the open-source community is grateful, and face it, PHRs are so new that we need all the crowdsourcing we can get. Upcoming FOSS applications like ChiefMedicalOfficer, Health Wave, and Patient Aware Wave, demoed this past weekend, are proof of the value in that.

Google has been, and continues to be very responsive to the developer community. That's good news on the Don't Be Evil front.

Tuesday, August 4, 2009

I Am Become Cliche

I drive my Audi hatchback down the exit ramp marked "Downtown San Jose" and a sign showing the NASDAQ's current value greets me. I have a Daffy Duck kooshball for my desk and a flying spaghetti monster on my car. How did it come to this?

Friday, July 31, 2009

Patient Identifiers as Functional Equations

Not long ago I was talking to the CIO of a major hospital - won't say which one ;) - and describing how the Brazilian national EMR uses URIs to render unnecessary the traditional names, addresses, and other identifiers which many people in the Amazon River Basin simply don't have.

URI - Uniform Resource Identifier.

His reaction surprised me. First, he had not heard of this: a nationwide paperless medical record system, the largest open source project in the world, funded in part by the United Nations global health fund. (I came across it through the open source Java community.) Then he asked, "Why can't we have a universal identifier?"

So right now I am telling you to forget about universal identifiers. Not in this country. Instead start thinking about something better:

Organizational identifiers and computational inferences of equivalency.

I just started studying the IHE technical framework, and I have not yet dug into the guts of the live system, but I'm already getting the feeling that this approach is more in line with how I have been thinking about things.
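To make this concrete, here is a back-of-the-envelope sketch. The types and the matching rule are invented for illustration (real systems, IHE included, use probabilistic matching over many demographic fields): each organization keeps its own identifier, and "same patient" is a computed inference rather than a stored universal key.

class PatientRecord {
    final String assigningAuthority; // the organization that issued the ID
    final String localId;            // meaningful only within that org
    final String lastName;
    final String birthDate;          // ISO format, yyyy-MM-dd

    PatientRecord(String assigningAuthority, String localId,
                  String lastName, String birthDate) {
        this.assigningAuthority = assigningAuthority;
        this.localId = localId;
        this.lastName = lastName;
        this.birthDate = birthDate;
    }

    // The "functional equation": equivalence is computed from the data,
    // never assumed from the identifiers themselves.
    static boolean inferSamePatient(PatientRecord a, PatientRecord b) {
        return a.birthDate.equals(b.birthDate)
            && a.lastName.equalsIgnoreCase(b.lastName);
    }

    public static void main(String[] args) {
        PatientRecord p1 = new PatientRecord("hospital-a.example.org",
                "MRN-00042", "Silva", "1972-03-14");
        PatientRecord p2 = new PatientRecord("clinic-b.example.org",
                "PT-9911", "Silva", "1972-03-14");
        // Same person, two organizational identifiers.
        System.out.println(inferSamePatient(p1, p2)); // true
    }
}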

Thursday, July 30, 2009

Semantic Search Done Right

Someone gets it...

http://sig.ma/

Sigma, Semantic Information Mash-up, from our friends at DERI in Galway. This is a glimpse into the power of semantic search. It will not look like regular search. It will not look like Google. It will not be a simple ranking of keyword matches. It will not, in short, be one dimensional.

The semantic web, the web of data, is multi-dimensional. The days of one-dimensional search ranking are drawing to a close. I don't know what the UI will ultimately look like, but when we finally get there you will search by starting with an idea, and from that idea you will move in one direction or another to other related ideas. Your sources will be ranked according to relevance and reliability. You will find new, related concepts. Keyword homonyms will no longer be an issue. This is where the web is going.
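Here is a toy illustration of that last point using Jena, with URIs made up for the example: two resources share the keyword "Mercury," but in a web of data they are distinct nodes, and a typed query cannot confuse them.

import com.hp.hpl.jena.rdf.model.*;
import com.hp.hpl.jena.vocabulary.RDF;
import com.hp.hpl.jena.vocabulary.RDFS;

public class Homonyms {
    public static void main(String[] args) {
        Model m = ModelFactory.createDefaultModel();
        String ns = "http://example.org/";
        Resource planet = m.createResource(ns + "Planet");
        Resource element = m.createResource(ns + "ChemicalElement");

        // Two nodes, one keyword
        m.createResource(ns + "Mercury_planet")
                .addProperty(RDF.type, planet)
                .addProperty(RDFS.label, "Mercury");
        m.createResource(ns + "Mercury_element")
                .addProperty(RDF.type, element)
                .addProperty(RDFS.label, "Mercury");

        // A keyword search sees one string; the graph sees two resources.
        // Restricting by type resolves the homonym for free.
        ResIterator it = m.listSubjectsWithProperty(RDF.type, planet);
        while (it.hasNext()) {
            System.out.println(it.nextResource().getURI());
            // prints http://example.org/Mercury_planet only
        }
    }
}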

Tag your pages now.

Thursday, July 23, 2009

My Google Interview: Epilogue

I could put this badge next to a stock ticker to show the exact moment the economy went into free-fall, dragging Google's stock price and hiring budget down with it. Do I have great timing, or what? I guess you can't be in the right place at the right time every time.

A few weeks after the rejection - and my free t-shirt - I got an email from Google. It was a customer satisfaction survey! They wanted to know what I thought of the interview process, how it could be improved, etc. Did I apply for a job, or buy a home stereo? Personally I thought the whole process was great, and so were the people involved. Of the phone interview I said, "It was an efficient exchange of relevant information, exactly what I look for in a technical interview." The interviewers got high marks from me.

I never had to fill out a customer satisfaction survey after a job interview before. Sometimes I really like Google.

I was talking to Kevin, who co-organizes SV-GTUG, the developer group for all things Google in Silicon Valley. He was rejected three times. Would I apply again? Probably not. I'm glad Google exists, but I'm going in a different direction now, and I am happy to be just another Silicon Valley software engineer. I think it's a pretty cool thing to be.

Friday, July 17, 2009

My Google Interview, Part 2

"Everyone knows about the crazy benefits at Google," I said, "but the truth is I'm only interested in one benefit, and that's 20% time. I'll take a job as janitor if you let me spend 20% of my time working on a research and development project of my choice."

So began my series of conversations with Google. I knew that would be a good selling point, but it was also true. I had some ideas and could think of no better place to develop them. Access to the resources, talent, and knowledge base within the Googleplex was my #1 reason for wanting to work there. All the other stuff you hear about - free food, car washes, volleyball - the purpose of that is to make it a good place to stay and code 24 hours straight.

I like smart people, and I like solving computer problems, so the all-day on-site interview was very enjoyable. The HR recruiter greeted me in the lobby and asked if I'd ever been to the Googleplex before. I replied that I had actually been there the previous Tuesday. In fact I'm down there probably once or twice a month just through my involvement in the developer community.

I'm not going to discuss the interview itself or say anything in violation of the NDA I signed, but let me answer the most common questions. First, I did not get a tour of the Googleplex. Through the entire day I saw the lobby, the cafeteria, and a conference room. Like I said, I'm in Building 43 often - the one with the model of SpaceShipOne hanging from the ceiling. I do have a funny story involving SpaceShipOne, the X Prize, and William Shatner that I didn't get to share, but that's my only regret there. Second, no, they did not ask goofy questions. I know Google has a reputation for doing that, but it wasn't the case. The questions were very good, relevant to the job, and clearly tested my knowledge and skill. Third, being grilled all day was not stressful. It was fun. At the end of the day I felt great. I got to show off, fill up whiteboards with drawings, and show that I know how to solve the kinds of problems they need solved. Finally, to answer the big question and remove any remaining suspense: I did not get the job.

I got a call a week and a half later, and knew that was too soon to be a yes. They said it was a close decision. It's okay. I know more or less where I went wrong, I'm in good company, and I'm convinced I could get the job if I interviewed again. Also, this was during their big stock slide, when they had an unofficial hiring freeze and were laying off contractors. In any case, I feel good about getting as far as you can possibly get without actually getting an offer. It tells me they think I'm good enough to work there, and that's good enough for me.

The day after the rejection I called back and said if I can't have the job can I at least have a free t-shirt.

I got it.

Do I know how to negotiate or what?

My Google Interview, Part 1

Google has a famous (infamous?) hiring process: long waits, weird questions - or so I'd heard. My experience was markedly different. Someone put me in touch with Google HR, I sent my CV, and I heard back almost immediately. Later that week I had my initial phone screen, which went really well. The following day the recruiter asked when I would be available for a technical phone interview. I gave her some inconvenient times that week, and said anytime the next week. From this point onward, I was actually putting them off: I knew it was going to be a tough technical interview, and I wanted time to study.

Some of the books I studied:

The Art of Computer Programming (Knuth), volume 1, sections 1.1-1.3, working all the exercises, for a math and algorithm refresher.

Introduction to Algorithms, the canonical college textbook.

The Practice of Programming (Kernighan and Pike), an excellent book.

Sun Certification guides for Java and JEE.

Design Patterns in Java, Core J2EE Patterns, and the Gang of Four book.

Effective Java - if you are a Java programmer and have not read this book, read it now.

The "Fielding Dissertation" on the REST interface and other ACM/IEEE technical papers.

...and many others! For the next 2 weeks it looked like a library exploded in my house.

The phone interview went great. It was given by a Dutch engineer whose job at Google is to classify pornography. The questions were challenging, testing my knowledge of algorithms, Java, and problem-solving, mostly centered on search, sorting, and general programming practice. He seemed impressed when I answered one question with an "except when..." and pulled out a bit of trivia on how the JVM deals with pointers to arrays of primitive types in a pass-by-value scenario. Ha! The guy quizzing me had to look something up.
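For the curious, the gist of that trivia as a quick demo (reconstructed from memory - this was not the interview question): Java passes everything by value, but for an array the value passed is a reference, so a callee can mutate your elements while any reassignment of the parameter stays local.

public class PassByValue {
    static void mutate(int[] arr) {
        arr[0] = 99;            // visible to the caller: same array object
        arr = new int[] { -1 }; // invisible to the caller: only the local
                                // copy of the reference is reassigned
    }
    public static void main(String[] args) {
        int[] data = { 1, 2, 3 };
        mutate(data);
        System.out.println(data[0]); // prints 99 - not 1, and not -1
    }
}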

Two days later I was invited for an on-site interview.

Again, I gave them the, "Well this week is booked but how about next week" stall to give me still more time to study. They were merciful and scheduled it for Friday.

About this time, I'm feeling really special. I sent an email to my friends, one of whom had already interviewed at Google, which led to the following exchange.



From: Tom Wilson

to Charles, Elana, Lori, Alex
Sep 19

I got an on-site interview (with Google) scheduled for next Friday.

Yay!

-tom
---

From: Elana Silver


Sep 19


Make sure to figure out how many pandas can fit on a cruise liner before you get there. Also, how many prime numbers there are between 17 and 775.

Congrats!

From: Tom Wilson

to Elana
7:55 PM


Good idea!

1.
google search: cruise liner deadweight tonnage
10,000 tons at 100 cu ft/ton = 1,000,000 cu ft of cargo space.

google search: panda transport cage size
45cm x 40 cm x 45 cm (red panda)
= 1.48ft x 1.31ft x 1.48ft = 2.87 cu ft

1,000,000 cu ft / 2.87 cu ft

= 348,000 pandas.


2.
That would be 131 prime numbers. Algorithm follows.



import java.util.ArrayList;

public static ArrayList<Integer> getPrimes(Integer lowerBound,
                                           Integer upperBound) {
    int i, j, notPrime;
    int[] n = new int[upperBound - lowerBound + 1];
    // list all numbers in the range
    for (i = lowerBound; i <= upperBound; i++) {
        n[i - lowerBound] = i;
    }
    // sieve: zero out every multiple i*j that lands in the range
    for (i = 2; i <= upperBound / 2; i++) {
        j = 2;
        while ((notPrime = i * j++) <= upperBound) {
            if (notPrime >= lowerBound)
                n[notPrime - lowerBound] = 0;
        }
    }
    // put the remaining (prime) numbers in a list
    ArrayList<Integer> primeNumbers = new ArrayList<Integer>();
    for (i = lowerBound; i <= upperBound; i++) {
        if (n[i - lowerBound] != 0)
            primeNumbers.add(n[i - lowerBound]);
    }
    return primeNumbers;
}

Monday, June 1, 2009

SQL Gymnastics

Every medical professional has a set of specialties, stored here in an expertise table. This view uses a common table expression to define a sort sequence too complex for a plain ORDER BY clause: older records are sorted by OrderNo and have SequenceNo = 0, while newer records are ordered by SequenceNo. To complicate matters, records are grouped by the facility where the specialties are practiced. We group each professional's expertises together, work out the sorting, then denormalize these fields so the first four can be placed in a table optimized for fast access. This code was never used.


CREATE VIEW mso_test.ipd2MedProfExpertise AS
WITH ExpertiseList AS (
    SELECT
        row_number() OVER (
            ORDER BY Med_Prof_Record_No, SequenceNo, OrderNo
        ) AS 'RowNumber',
        Expertise,
        Med_Prof_Record_No,
        FacCode
    FROM mso_test.Med_Prof_Expertises
)
SELECT
    e1.Med_Prof_Record_No,
    e1.FacCode,
    e1.Expertise AS Expertise1,
    e2.Expertise AS Expertise2,
    e3.Expertise AS Expertise3,
    e4.Expertise AS Expertise4
FROM ExpertiseList e1
LEFT OUTER JOIN ExpertiseList e2 ON (
    e2.Med_Prof_Record_No = e1.Med_Prof_Record_No AND
    e2.FacCode = e1.FacCode AND
    e2.RowNumber > e1.RowNumber
)
LEFT OUTER JOIN ExpertiseList e3 ON (
    e3.Med_Prof_Record_No = e1.Med_Prof_Record_No AND
    e3.FacCode = e1.FacCode AND
    e3.RowNumber > e2.RowNumber
)
LEFT OUTER JOIN ExpertiseList e4 ON (
    e4.Med_Prof_Record_No = e1.Med_Prof_Record_No AND
    e4.FacCode = e1.FacCode AND
    e4.RowNumber > e3.RowNumber
)
GO

Monday, May 11, 2009

I Dream of SQL

The dream was solid white. In front of me, hundreds, maybe thousands, of lines of code were scrolling upwards, too fast to read, as if I were holding down the arrow key. It was SQL code with color syntax highlighting and a musical accompaniment. As each line of code scrolled past, each word would light up in its own color and play its own note, like a player piano, like the old days when you could place a transistor radio next to the CPU and hear the loops and jumps in your program. It sounded terrible. Out of key, out of tune, no rhythm or melody at all. Just a jumble of notes blurring into white noise.

There I was with three sheets of player piano music, written entirely in SQL code, one for each of the three major subsystems, side by side, all scrolling together. Each word in the code would light its own color and play its own note as it flew past, too fast to pay attention to the individual SQL statements, but I could see the code structure and indentation as rhythm, the overall form as tonality, and I could see the three part harmony. Nice to know those music classes are good for more than just singing around the campfire.

First I had to arrange the structure so that the three were in rhythm with each other. Once the three database structures matched, I then transposed them to the same key. At that point bulk copies could work efficiently, but the melodies still needed work. I made my arrangements through the night, sleeping soundly, until I had a beautiful song I could enjoy. It sounded great. I played the entire composition several times but to this day do not remember how it goes.

I woke up in the morning and went to work, refactoring and rewriting the SQL code until it matched the previous night's arrangement. By mid-afternoon I was running my first data synchronization tests. It worked.

The hospital had its data model.

Sunday, May 10, 2009

Dreaming in Code

I bought the book Dreaming in Code by Scott Rosenberg not long after it was published, and it's been sitting on my shelf, unread, ever since. I've read a lot of books on computer history - in fact, almost all of them cover the period from the 1970s to the 1990s. Since arriving here in 1997, I've been close to many of the historic events that will fill a whole new series of books. This one, though, I should read, for two reasons:

One, it's about the pitfalls and mistakes that can bring down a large software project. Some of the reviews said it can hit too close to home, but it can also make you feel better, since even some of the top minds in the industry are not immune to the same mistakes.

Two, I have actually dreamed in code, which is the story I want to tell.

It was late in the early phase of a big project, and I was responsible for building a multi-source enterprise data model for the entire hospital. The tools at my disposal were a SQL database, bulk data copies, stored procedures, and data transforms using a view/copy mechanism I had developed for the physician directory years earlier - which was to be adapted for one of the three major portions of the enterprise architecture, the other two being physical locations and services provided.

The main problem was data flow.

I struggled for weeks with various data models, trying to fit them together: integrating the different source systems, keeping the updates moving reliably from staging to production, catching errors before they propagated, and making the whole thing easy for one person to oversee.

Everything I tried either caused conflicting changes to overwrite each other, or entire data copies to fail. Each of the three main subsystems - physician, location, service - was an entire application data model in and of itself, and they were interlinked in increasingly complex ways. The end result, if it worked, would be amazing: whether you were looking for a service, a doctor, or the nearest location for either, all the connections would be there.

If it worked.


(to be continued)

Wednesday, April 29, 2009

Old School Coding

ComputerWorld has an article on things programmers did in the old days and probably won't miss.

Spaghetti code, GOTO, and the FORTRAN idioms - I remember them well. Punch cards were a bit before my time. The one thing I don't miss is programming entirely in emacs and vi. Modern IDEs with code completion and visual modeling (and now semantic resource management) are a huge time saver. When code completion first appeared it blew my mind: it was like the documentation automatically opened to exactly the right page as I typed. My taped-together copy of K&R C could go on the shelf. Now with visual modeling, I can rearrange my program structure to my heart's content before writing a single line of code. Don't even get me started on refactoring. It used to be impossible.

Of course, with UNIX I didn't have to worry about the 8+3 file name limit, although I did write some TSR (terminate-and-stay-resident) programs in x86 assembly. I remember writing the UNIX "more" command for DOS with my own twist on error handling. Instead of "File not found" it would print, "So the bartender says, that's no file, that's my wife!" What can I say, I was a rambunctious young scamp.

I thought my home-built overclocked 286 with co-processor (I couldn't settle for just integer math, could I?) was total cyberpunk. It had a giant hard drive that sounded like a jet engine taking off when I booted. Even so, it was mostly a dumb terminal for the University supercomputers. The Convex C-220 had an awesome debugger. I managed to avoid Windows almost entirely until the late 90s, when I moved to Silicon Valley.

Slow computers? I remember a scientist in the early 90s complaining about fast computers. In the old days, he'd say, he could load his data, run the program, and go off for a nice lunch; when he got back he'd have his results. Now, he'd complain, a data reduction that used to take 2 hours completes in 20 minutes - hardly time to do anything! Also something about VMS being God's own operating system, while younglings like myself thought UNIX was God's own operating system.

Ah, memories...

Tuesday, April 28, 2009

Enterprise 2.0

The new buzzword, Enterprise 2.0, refers to private, internal social networks: for example, instead of Twitter, employees use a Twitter-like application kept safely behind the firewall, with all the access and security controls applied to any internal communication. Google's find-an-expert is an early example of this. Twitter-like applications for internal communication may or may not be useful, but some kind of internal IM definitely is.

ReadWriteWeb has more.

More Healthcare IT Articles

Some recent articles on Healthcare IT and the EMR...

Improving EMR Usability: Part 1, Part 2, Part 3.

What Usability Is and How to Recognize It

The Dubious Promise of Digital Medicine (Business Week)

The Data Model That Nearly Killed Me

Thursday, April 16, 2009

Mylyn

I'm trying out Mylyn and Tasktop, and I love it. Everyone on my project is required to use Mylyn now, and Tasktop too - we're doing the free trial - to integrate Bugzilla, OPAS (our corporate help desk system), calendars (for releases and help desk tickets), and CVS. This incorporates a lot of the ideas I've been toying with around the semantic desktop project. Now when I go back to that bug I worked on a month ago, I see exactly the files involved (say, 5 files in 3 different projects among hundreds), supporting documentation, even web resources. Regardless of the source system, it's all URIs. Exactly right.

Wednesday, March 25, 2009

The Memory Box

I do lots of stuff online, and lots offline. I have documents on my desktop and web pages I open. Where is that Google App Engine account I got when it was in beta and haven't touched since? It's still in my browser history. Dated documents, bookmarks, emails, user history - all the related material is there, but how do I find it? How do I organize it? How do I crawl the different things I've done on my laptop, any one of which I may wish to recall at some later date? That programming library I installed last month - where are the API docs, the tutorial, and the article I found in some Google search at the time? It's all in my little electronic box of memories. I need a web of remembering.

semanticdesktop.org

Tuesday, March 10, 2009

Identity Blog

Somewhere along my Googlings I came across the subject of digital identity, which I found days later in a long-forgotten browser tab. Save it! It looks like good reading for later.

Kim Cameron's Identity Blog

The page I had bookmarked was THE LAWS OF IDENTITY. I landed there while looking at what people are saying about the term Enterprise 2.0, which as far as I can tell is an attempt to bring Web 2.0 social features - blogs, chat, Erdos-Bacon numbers, etc. - into a private intranet setting. The real question is: how do employees use it day-to-day to help them in their jobs? We need a usability study similar to what Jeff Hawkins did when designing the Palm Pilot. The case for usability is easy to make, and lo-fi prototyping can be invaluable in getting a product started in the right direction.

Wednesday, March 4, 2009

UML Primer

I'm not as keen on sqworl anymore, so I'm just going to list some cool blog links here.

First, I like this practical UML primer. Even if you already know this, it's good review. I've been reviewing the basics on a lot of things lately.

For more excellent reading on software development in general, there is a collection of essays known as Joel on Software.


Some Thoughts on Healthcare Web Strategy

What is a good technology strategy, and how does it relate to a good business strategy? How does one determine which technology initiatives deliver the most bang for the buck? Everyone knows the proper application of IT can save money by automating the most labor-intensive processes and streamlining inefficient ones, and it can improve services and make money by attracting and keeping more customers. To fit our organization’s more altruistic goals, it can improve medical care - for example, by making the referral process faster - and it can offer patient services that enhance overall care.

What follows are some examples of web technologies and how they can be of use. This is not what we “ought” to do, but is meant to illustrate some of the issues and factors to consider when developing a web strategy.

The New Yellow Pages

What doctors specialize in my child’s condition and practice within 20 miles of my home? Now only show me the ones with an available appointment time this week. This query is based on a new search technology, less than ten years old but growing rapidly in use. It works by matching graph patterns, which can be thought of as traversing the relationships from condition to specialty, specialty to doctor, doctor to clinic, and doctor to schedule. This does not replace relational databases or traditional keyword-based web search, but it does provide a new and very useful way to query data. Some data, such as schedules, locations, and contact information, can be retrieved from existing systems; some will need to be created and maintained. For example, how does a doctor tell the system that he practices at a certain clinic?

Using the URL-as-URI data model (Berners-Lee, 1999), he simply logs in, opens a web browser to the clinic home page, and clicks, “I practice here.”
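To sketch the kind of graph-pattern query this implies - the vocabulary here is invented purely for illustration, not an actual LPCH schema - here is roughly how it would look with Jena:

import com.hp.hpl.jena.query.*;
import com.hp.hpl.jena.rdf.model.*;

public class FindDoctors {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        // ... load physician, clinic, and specialty data into the model ...

        // Traverse condition -> specialty -> doctor -> clinic.
        // Distance and open-appointment filters would join in the same way.
        String q =
            "PREFIX ex: <http://example.org/med#> " +
            "SELECT ?doctor ?clinic WHERE { " +
            "  ex:SomeCondition ex:treatedBySpecialty ?specialty . " +
            "  ?doctor          ex:hasSpecialty       ?specialty . " +
            "  ?doctor          ex:practicesAt        ?clinic . " +
            "}";
        QueryExecution qe = QueryExecutionFactory.create(
                QueryFactory.create(q), model);
        ResultSet results = qe.execSelect();
        while (results.hasNext()) {
            QuerySolution row = results.nextSolution();
            System.out.println(row.get("doctor") + " @ " + row.get("clinic"));
        }
        qe.close();
    }
}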

Successful IT initiatives are evolutionary, not revolutionary, and what has been described so far is something that can be built on top of software currently in use at LPCH. It utilizes existing web content, data stores, and applications, and adds some new tools developed by HP Labs and Stanford Bioinformatics. Rather than wholesale system development, it would be constructed by adding a sequence of small feature enhancements to the existing physician search (Java version). For example, first the links between doctors and clinics (and specialties) provide a two-way reference and additional web content for each, and vCard support can provide the distance calculations and input for map APIs. Each of these is a useful feature in and of itself. If later combined with the vCal API to Microsoft Outlook or Cerner scheduling, one could find matches within a set of preferred appointment times. This is a natural growth of, and not a replacement for, technology that already works.

As a part of a larger enterprise directory, this can be combined with a variant of the FEA-RM enterprise architecture ontology (Allemang, et al, 2004), which uses a set of polymorphic nested containers, more or less independent sub-trees inheriting one of two transitive properties: comprises and comprisedOf. If A contains B, and B contains C, then A contains C and C is part of B and A. With those rules available, all a manager should have to do is add a contractor or new employee to her group, and this person is automatically also a member of the correct cost center, department, and organization, instantly appearing in all appropriate directory listings.
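A minimal sketch of that inference in plain Java - the names are hypothetical, and a production system would use the ontology and a reasoner instead: containment is stored once, at the point of attachment, and transitive membership falls out of a walk up the tree.

class OrgUnit {
    final String name;
    final OrgUnit parent; // null for the root organization
    OrgUnit(String name, OrgUnit parent) {
        this.name = name;
        this.parent = parent;
    }
    // comprisedOf, applied transitively: walk up toward the root
    boolean isPartOf(OrgUnit container) {
        for (OrgUnit u = parent; u != null; u = u.parent)
            if (u == container) return true;
        return false;
    }
    public static void main(String[] args) {
        OrgUnit hospital = new OrgUnit("Hospital", null);
        OrgUnit dept = new OrgUnit("Cardiology", hospital);
        OrgUnit group = new OrgUnit("Imaging Group", dept);
        OrgUnit contractor = new OrgUnit("New Contractor", group);
        // Added to one group, automatically part of the department
        // and the organization:
        System.out.println(contractor.isPartOf(dept));     // true
        System.out.println(contractor.isPartOf(hospital)); // true
    }
}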

Information Transformation (the perspective you need)

A doctor, a nurse, and a patient all look at the same set of data, but not in the same way.

The broad goal of IT in any organization is to get the right information to the right people at the right time, and this role is even more critical in the medical field. Therefore we must take a close examination of the transformative properties of web technology.

The web is ultimately about the serialization and de-serialization of data. That was its original design intent, and that is the essence of its power today. Serialization is defined as the transformation of data from within an IT system (a database or application, for example) to a transmittable document (say, a web page or XML message). With this serialization/de-serialization comes transformation. There are so many transforms and data markup standards on the web today that in web systems design, build-by-restriction is as important as build-by-feature (Fielding, 2000). There are many possible serializations of a given data set, and engineers choose the one they think is best for the purpose at hand. The key to good design is selecting the most efficient transform between your data set and each of its intended uses.
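As a trivial sketch of that principle - the record and both output formats are invented for the example - here is one dataset with two serializations, each transform chosen for its audience:

public class Serializations {
    public static void main(String[] args) {
        // One fact, straight from the (imaginary) source system
        String patient = "Jane Doe";
        String labCode = "2345-7"; // a coded lab result, for the clinician
        double value = 105.0;

        // Clinician-facing serialization: precise, coded, machine-readable
        String xml = String.format(
            "<result patient=\"%s\" code=\"%s\" value=\"%.1f\"/>",
            patient, labCode, value);

        // Patient-facing serialization: the same fact, de-jargonized
        String summary = String.format(
            "%s, your blood glucose result is %.0f mg/dL.",
            patient, value);

        System.out.println(xml);
        System.out.println(summary);
    }
}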

An effective serialization must be designed in the context of those two data structures. In the case of patient data at LPCH, the source data structure is defined, with the Cerner EMR being the “source of truth.” Likewise, one serialization target, namely physicians, is also defined as it appears in the Cerner physician-facing UI. MDPortal, being targeted at physicians, mirrors this closely, with the patient census list being the only notable departure from the source data structure. Another serialization target, patients and families, is not so well-defined. Many PHR systems mirror the traditional data stores of healthcare, which may be deftly used in the hands of professionals, but become unwieldy to the average person. The ideal patient/family care workflow – and the most effective data structure transformations to facilitate that workflow – is not a well-solved problem industry-wide. The first team to solve it will gain a sizeable market advantage and remove one of the last barriers to widespread PHR adoption.

Communication

Much has been made of the advent of the social network, and it is indeed a significant development. So much so, that it becomes important to look at other times in history when new technology has facilitated, changed, and reshaped the nature of communication. The core principles of the past were not in fact overturned, and those who succeeded did so by adapting those principles to the new environment. The new media also tends to adapt itself to the existing world, as is the case with the size of a standard hardcover book or the shape of the iPod.

This time, the new tool is the relationship graph, which differentiates itself from group membership as well as from traditional broadcast communication. Where a traditional media broadcast goes to as wide an audience as possible without regard to relationship or membership (though possibly with regard to location), the social network sends communications to friends and friends-of-friends, regardless of location or other factors. Relationships can be friends, former patients, donors, employees, or anyone on an opt-in basis, but to a computer they are simply connections. Social networks store these relationships as graphs, and communications travel outward along these connections, usually no more than one or two Bacon Units (BU) away, a BU being the Erdos-Bacon number of an audience member (Milgram, 1967).
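The propagation itself is just a graph walk. A toy version, with names invented and reach limited to two Bacon Units:

import java.util.*;

public class Reach {
    public static void main(String[] args) {
        // Toy social graph: who hears a message posted by "hospital"
        // if it travels at most two hops along relationship edges?
        Map<String, List<String>> connections =
                new HashMap<String, List<String>>();
        connections.put("hospital", Arrays.asList("alice", "bob"));
        connections.put("alice", Arrays.asList("carol"));
        connections.put("bob", Arrays.asList("dave"));
        connections.put("carol", Arrays.asList("erin"));

        Set<String> heard = new HashSet<String>();
        List<String> frontier = Arrays.asList("hospital");
        for (int hop = 0; hop < 2; hop++) { // breadth-first, 2 BU max
            List<String> next = new ArrayList<String>();
            for (String person : frontier) {
                List<String> friends = connections.get(person);
                if (friends == null) continue;
                for (String friend : friends)
                    if (heard.add(friend)) next.add(friend);
            }
            frontier = next;
        }
        System.out.println(heard); // alice, bob, carol, dave - not erin
    }
}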

The most effective communication strategy will use all the available tools in concert.

It is also important for a healthcare organization to remember what information is under control, and what is not. The typical Web 2.0 business model – and this includes social networks – is to collect user information, most commonly but not always for advertisements. What happens on our servers is under our control, and the information can be archived for auditing, or it can be destroyed for confidentiality. User activity that happens on hospital servers (or those of business partners) can be kept private, activity “in the wild” cannot. This includes conversations, group memberships, activities, applications, and click-through. While social networking is shaping up to be a powerful communication tool, I keep to a standard rule of thumb: Don’t say anything on Facebook you wouldn’t say in a crowded room with the advertising industry’s brightest young analysts standing next to you taking notes. Also they have everyone’s email address.

Process

A lot of documentation is (or should be) created during the lifecycle of an IT project. Even the most detailed conversation about an architectural software model is no substitute for a simple UML diagram. From the first inception phases, everything must be documented, written for an intended audience who might not have you there to explain it. Upon project completion, this documentation should tell a story titled “How We Built This Thing (so you can too),” from which operations manuals can be distilled. Along with the deliverable working system, a complete development and testing environment must be provided. The artifacts produced in a normal project lifecycle should lead naturally to these results.

“A problem well-stated is a problem half-solved.” Everyone understands the importance of good requirements. However, there is more than one way to present a given set of requirements, and the most effective way to state them often depends on the technology used as well as the business need. An iterative process - starting with a first-order approximation of business needs and technology capabilities, followed by refinement and convergence - has proven successful in the past, and will continue to be.

Wednesday, January 28, 2009

More Squirrels and Network Tools

I created a new sqworl page with some bookmarks. I call it Stanford Life (for a computer geek).

And now for something completely different...

Network measurement tools!

http://measurementlab.net/

Tuesday, January 27, 2009

I Like Squirrels

When I heard Google is discontinuing their notebook plugin (among other products), I went looking for a replacement. Last night I found Sqworl, which launched late last year and is a way way better tool for research, documentation (APIs and such), and reference material (semantic web ontologies, for example). Looks like it has everything I need, except easy import/export of my existing bookmarks.

The site is:

http://sqworl.com

For an explanation/review, see:

http://www.techcrunch.com/2009/01/16/elevator-pitch-friday-sqworl-one-link-to-rule-them-all/

I created a simple sample documenting Google's famous development process:

http://sqworl.com/?i=cdc23d

Friday, January 16, 2009

I Know This!

Sometimes when I face a really difficult problem, I have to look at it and say, "It's easy!" and then figure out why it's easy.

FRINITE NOW KTHXBYE

Thursday, January 15, 2009

The Brazilian National EMR

I was at a party the other night, having a lively discussion about the state of technology with James Gosling of Sun Microsystems and...

I'm sorry, but I've always wanted to start a blog post like that. Does that make me a geek? Or a nerd?

Anyway, we were discussing the Brazilian Healthcare IT system, which as it turns out has a number of interesting features, for example:
  • It is completely paperless: you do everything on a computer or cell phone, and everybody in Brazil has a cell phone.
  • By data-linking doctors' prescriptions with pharmacy refill orders, it all but eliminated drug fraud.
  • It can track influenza outbreaks in real time.
  • Your personal health record works just as well if you're a tribesman with no name living in a canoe on the Amazon River.
  • It is the world's largest open source project.
More on that later.

Saturday, January 10, 2009

KDE

Installed KDE 4.1 on my Ubuntu 8.10 virtual machine. I'm digging this virtual machine stuff; I'm starting to wonder how I ever got along running multiboot OSes on bare hardware - though I did like that bootloader that showed pictures of an apple and two penguins at startup.

The NEPOMUK semantic desktop foundation is also installed; once I set up Eclipse with the C++ APIs I can get to work.