Downton Abbey, the British period drama television show, has a time dilation of approximately 3.1.

Highclere Castle, the filming location of the period drama Downton Abbey.
Source: Wikipedia

The first two seasons cover the period from April 1912 to sometime in 1918 (assumed to be July 1918), an elapsed time of 6.25 years. But these two seasons aired over two years of our time, hence time passed on the show 3.125 times faster than it passes in our reality.

This will have some odd effects if the show continues indefinitely. For example, by doing the math, we can see that in 2041, Downton Abbey will cover the period from 2009 through 2012, overlapping the broadcast start of the series. We can call this the meta-phase of the show, wherein for the next fourteen years, the show will be primarily concerned with the making of the show in previous years.

The real hiccup occurs in 2056, when the show will cover the period from 2056 through 2059. Then it will transform from a period piece to a futuristic speculative fiction series.
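For the curious, the arithmetic above can be sketched in a few lines of Python, assuming (as the math implies) a story start of April 1912 and a first broadcast year of 2010:

```python
# A back-of-the-envelope model of the show's "time dilation":
# 6.25 story-years elapsed over 2 broadcast years.

DILATION = 6.25 / 2          # story-years per broadcast year (3.125)
STORY_START = 1912.25        # April 1912
BROADCAST_START = 2010       # assumed first broadcast year

def story_year(broadcast_year):
    """The year the show depicts during a given broadcast year."""
    return STORY_START + DILATION * (broadcast_year - BROADCAST_START)

print(story_year(2041))      # ~2009: the meta-phase begins
print(story_year(2056))      # 2056.0: the show catches up to the present
```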

I’ve been thinking about the web and the role and effect of social networks. While I’m a user of Facebook, and like certain parts of it, there are other aspects of it that concern me, both for the impact it’s having now, as well as for the future. As an idea person, I ponder how we can get the benefits of social networking without the costs, while regaining the open web we used to have.

If you haven’t done so, go read Anil Dash’s The Web We Lost. I’ll wait.

I’m going to cover three topics in this post:

  1. The shortcomings of social networking as they exist today. 
  2. The benefits of social networking. I don’t want to throw away the good parts.
  3. A description of what a truly open social network would look like.

The Problems of Today’s Social Networks

These are the main problems I see. I’m not trying to represent all people’s needs or concerns, just capture a few of the high-level problems.

Transitory Nature

My first problem with Facebook and Twitter is the transitory nature of the information. I’m used to the world of books, magazines, and blogs, where information is created and then accessible over the long term. Years later I can find Rebecca Blood’s series of articles on eating organic on a food stamp budget, my review of our Miele dishwasher written in 2006, or my project in 2007 to build the SUV of baby strollers. These are events that stand out in my mind.
Yet if I want to find an old Twitter or Facebook post, it’s nearly impossible, even if it happened just a few months ago. There was a post on Facebook where I asked for people who wanted to review my next novel and twenty-five people volunteered. Now it’s a few months later, and I want to find that post again. I can’t. (Having grown used to this problem, I took a screenshot of it, but that’s an awful solution.)
The point is that properly indexed and searchable historical information is valuable to us, our friends, and possibly our descendants. However, it’s not valuable to Facebook and Twitter, whose focus is on streaming in real time.

Ownership and Control Over Our Data

It should be unambiguous that we own our own data: our posts, our social network, our photos, and that we should have control over that information. As a blogger and author, I would choose to make much of that public, but it should be my choice. Similarly, it should be possible to have it be private. My data shouldn’t be used for commercial purposes without my explicit opt-in, and I should have control over who gets it and how they use it.
Personally, I’d like my content to be Creative Commons licensed: it’s mine, but you can use it for non-commercial purposes if you give me attribution.
Yet this is not the case today. We have problems, again and again, with Facebook, Google, Instagram, and other services claiming the right to use our material for advertising, using it commercially, reselling it, and so on.

Advertising

We should have the right to be free from advertising if we wish, and certainly to have our children not exposed to advertising. But the way social networks exist today, the advertising is forced on us, whether we want it or not. And while I can ignore it (although I still hate the visual distraction), it’s harder for my kids to do so.
We’ve unfortunately ended up in a situation where the only revenue model for these businesses seems to be advertising based, even though there are alternatives.

Siloing of Networks and Identity

I have a blog, a couple of other websites, and accounts on Twitter, Facebook, LinkedIn, Google Plus, Foursquare, YouTube, and Flickr. But, for all intents and purposes, there’s just one me. We try to glue these pieces together: sharing Instagram photos on Facebook, using TweetDeck to see Facebook and Twitter posts in one place, sharing checkins. But this is a terrible approach, because our friends and readers either see the same information in multiple places (if we share and cross-link) or miss it entirely (if we don’t). Because the networks are fighting over control points, they’re disallowing the natural openness that should be possible.

Privacy

For some people, privacy is a big concern. This isn’t a big one for me because I subscribe to Tim O’Reilly’s basic theory that obscurity is a bigger concern. (He was talking about authors and piracy, but I think the theory applies to most people, whether they’re furthering their career, starting a business, selling a product, etc.) I’m concerned about the use and misuse of my data by commercial interests, but I think that can be handled through mechanisms other than privacy. If I’m wrong, then yes, privacy becomes a bigger issue.

The Benefits of Social Networking as it Exists Today

Yet for all these complaints, there are pieces that are working.
I have a niece and her husband that I don’t get to see often, but they’re active on Facebook, and I feel much more of a connection to them as compared to family not on Facebook. I’m glad to share what’s happening with my kids with my mom. I have far more interactions with fans on Facebook than I ever had comments on my blog.
The attempts to surface the content that matters to me are imperfect (to be honest, often awful), but they exist in some form:
  • I don’t see a hundredth of the tweets of the people I follow, but using TweetDeck and searches on hashtags and particular people, I’m able to find many I am interested in.
  • Google Circles are far too much work to maintain, but for a few small groups of people, it helps me find the content about them.
  • Facebook’s automated algorithms are awful, showing me the same few stories over and over and over, but it’s an attempt in the right direction: trying to glean from some mix of people plus likes plus comments what to show to me. 

The Solution

I think there is a solution that combines the best of social networks and the best of the old, open web. I think it’s also possible to get there with what we have today, and iterate over time to make it better. 
What I’m going to describe is a federated social network. 
Others have discussed distributed social networks. You can read a good overview at the EFF: An Introduction to the Federated Social Network. If you look at the list of projects attempting distributed social networking, you’ll notice that they all list features they’ll support, like microblogging, calendars, wikis, and images. You’d host the social network on your own server or on a service provider.
Despite distributed social network and federated social network being used somewhat interchangeably in the EFF article, I want to argue that there are critical differences. 
The fully distributed social network described in the EFF article and in the list of projects feels like mesh networking: theoretically superior, totally open and fault tolerant, but in practice, very hard to create at any scale.
I prefer to use the term federated social network to describe a social network in which the core infrastructure is centrally managed, but all of the content and services are provided by third parties. The network is singular and centralized; the endpoints are many and federated. To continue the analogy to computer networking, it’s a bit like the Internet: a few big companies control the backbones that tie everything together, but we all get to plug into a neutral infrastructure.
(I’ll acknowledge that in recent years we’ve seen the weakness of this approach: we end up with a few big companies with too much control. But it’s still probably better that we have an imperfect Internet than a non-existent mesh network.)
Here’s my vision.

SocialX Level One: 

Let’s start by imagining a website called SocialX. I have an identity on SocialX, and I tie multiple endpoints into my account: Twitter, my blog, and Flickr.
Behind the scenes, SocialX will use the Twitter API to pull in tweets, RSS to pull in blog posts, and the Flickr API to pull in photos.
Visitors to my profile on SocialX will see an interwoven, chronological stream of my content, including tweets, blog posts, and photos, similar to the stream on Facebook or Google Plus.
SocialX will be smart enough to eliminate or combine duplicate content. If a tweet points to my own blog post, it can surmise that these should be displayed together (or the tweet suppressed), knowing the tweet is my own glue between Twitter and my blog: the tweet is an introduction to the blog post.
Similarly, if a blog post includes a Flickr photo, then the photo doesn’t need to be shown separately in my stream.
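A minimal sketch of that de-duplication logic. Nothing here is a real API; the item fields and the two suppression rules are just my illustration of the behavior described above:

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    source: str                              # "twitter", "blog", "flickr", ...
    timestamp: int                           # for chronological ordering
    url: str = ""                            # this item's own URL, if any
    links: set = field(default_factory=set)  # URLs this item points to

def merged_stream(items):
    """Interweave items newest-first, suppressing self-referential glue."""
    own_urls = {i.url for i in items if i.url}
    blog_links = set().union(set(), *(i.links for i in items if i.source == "blog"))
    out = []
    for item in sorted(items, key=lambda i: i.timestamp, reverse=True):
        # Rule 1: a tweet that points at one's own post is just glue.
        if item.source == "twitter" and item.links & own_urls:
            continue
        # Rule 2: a photo already embedded in one's own blog post.
        if item.source == "flickr" and item.url in blog_links:
            continue
        out.append(item)
    return out
```

Run over a stream containing a blog post, the tweet announcing it, and the Flickr photo it embeds, only the post (plus unrelated items) survives.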
Of course, SocialX will feature commenting, like all other social networks. Let’s talk about comments on blogs first. Let’s assume I’m using a comment service like DISQUS. By properly identifying the blog post in question, SocialX can display the DISQUS comment stream exactly as it would appear on the blog: in other words, both SocialX and the original blog post share the same comment stream. Comment on my blog, and your comment will show up in the SocialX stream associated with the post. Comment on SocialX, and the comment will show up on the blog.
Twitter replies can be treated as comments. In fact, the current approach of handling related messages on Twitter is obscured behind the “view conversations” button. On SocialX, Twitter replies will look like associated comments. And if you reply on SocialX, your comment gets posted back to Twitter as a reply. So both Twitter and SocialX will share the same sequence of shared content; it will just be represented as comments on SocialX, and as Twitter replies/conversations on Twitter.
In other words, the user interface of SocialX might look a lot like Facebook or Google Plus, but behind the scenes, we have two-way synchronization of comments.
SocialX can handle the concepts of liking/+1/resharing in a similar manner. The two high-level concepts are “show your interest in something” and “promote something.” Each can be mapped back to an underlying action that makes sense for the associated service. For Twitter, “show interest” can be mapped to favoriting a tweet, and “promote” can be mapped to a retweet.
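One way to picture this mapping. The service names and verbs below are illustrative assumptions, not real API calls; a real bridge would invoke each service’s own API:

```python
# SocialX's generic interactions mapped onto each service's native verb.
NATIVE_VERBS = {
    "twitter": {"comment": "reply", "show_interest": "favorite", "promote": "retweet"},
    "blog":    {"comment": "disqus_comment", "show_interest": "like", "promote": "reshare"},
}

def to_native(service, interaction):
    """Translate a SocialX interaction into the backing service's verb."""
    return NATIVE_VERBS[service][interaction]

print(to_native("twitter", "promote"))   # retweet
print(to_native("blog", "comment"))      # disqus_comment
```

The same table read in reverse is what keeps the two sides in sync: a reply made natively on Twitter surfaces as a comment on SocialX.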
So far, we’ve discussed how a single user’s stream of content looks. In other words, we’ve looked at it from the content provider’s point of view.
If a user named Tom comes to SocialX to view content, he can, of course, view a single user’s content stream. But Tom likely has multiple friends, and of course this is social networking, not just the web, so we’ve got to use social graphs to determine who Tom is interested in. 
SocialX will use any available social graphs that it’s connected to, and will display the sum total of them. So if Tom connects with Twitter, he’ll see the streams of everyone he follows on Twitter. If Tom connects with Twitter and LinkedIn, Tom will see interwoven streams of both. (Although SocialX will try to remove redundant entries across services by scanning posts to see if the content is the same.)
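A sketch of that graph merging and redundancy removal; the data shapes here are my own assumptions, not a defined format:

```python
def combined_following(graphs):
    """graphs: {service: set of followed user ids} -> the union of them all."""
    return set().union(*graphs.values())

def dedupe(items):
    """Drop items whose text already appeared (cross-posted content)."""
    seen, out = set(), []
    for item in items:
        key = item["text"].strip().lower()
        if key not in seen:
            seen.add(key)
            out.append(item)
    return out

follows = combined_following({
    "twitter":  {"sally", "bob"},
    "linkedin": {"bob", "carol"},
})
print(sorted(follows))   # ['bob', 'carol', 'sally']
```

Matching on normalized text is crude; a real service would likely also compare shortened links and timestamps, but the principle is the same.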
All of this is possible today. We can make it work with a handful of existing services by plugging them into a centralized network and doing the integration work on the central network to get these existing providers connected. It’s about bootstrapping.
Notice that we don’t need everyone to use SocialX for it to start being valuable. If Tom visits the site and follows another Twitter user named Sally, we can display Sally’s Twitter stream for Tom, and probably auto-discover her blog feed, making the service useful for Tom before Sally ever starts to use it. In essence, at this point we have a very nice social reader.

SocialX Level Two:

The next step beyond this is an API for the platform. Rather than force the platform to do work to integrate each new endpoint, we provide an open API so that other services can integrate into the network. When the next post-hipster-photo service comes out, they can integrate with SocialX just as Instagram once did with Facebook and Twitter APIs.
The API will require services to support a common set of actions for posting, commenting, liking, and promoting. Services will be required to provide posts in two formats: a ready-to-render HTML format, as well as a semantic form that allows other services to create viewers. (Semantic HTML would work as well.)
We require the semantic form because SocialX can’t be the only one in the business of rendering these streams. So SocialX will also provide an API for other services to provide a reader/viewer, or whatever you’d like to call it. This enables the equivalent of the TweetDecks and HootSuites in our environment. If someone can provide a superior user experience, they’re welcome to do so.
We also need to take a stab at figuring out what content to display. Should SocialX display everything, like Twitter? Use circles, like Google Plus? Heuristics like Facebook? Have a great search ability?
Let’s open it up to third parties to figure it out. A third party can consume all the streams I’m subscribed to, and then take their best attempt to figure out what I’m interested in. And if we set up this API in a smart way, it’ll function like a pipeline, so that we could have a circle-defining service divide up streams into circle-specific streams, and an interest-heuristic take each circle and figure out the most interesting content within that circle, etc.
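Sketched as code, the pipeline is just function composition over streams. The two filters below stand in for hypothetical third-party services (a circle-definer and an interest heuristic):

```python
from functools import reduce

def circles_filter(stream):
    """A circle-defining service: keep only items from my 'friends' circle."""
    return [item for item in stream if item.get("circle") == "friends"]

def interest_filter(stream):
    """An interest heuristic: rank by score and keep the top ten."""
    return sorted(stream, key=lambda item: item.get("score", 0), reverse=True)[:10]

def run_pipeline(stream, filters):
    """Feed the stream through each filter service in turn."""
    return reduce(lambda s, f: f(s), filters, stream)

raw = [
    {"circle": "friends", "score": 5, "text": "dinner pics"},
    {"circle": "work",    "score": 9, "text": "quarterly report"},
    {"circle": "friends", "score": 2, "text": "cat video"},
]
final = run_pipeline(raw, [circles_filter, interest_filter])
print([item["text"] for item in final])   # ['dinner pics', 'cat video']
```

Because every filter has the same stream-in, stream-out shape, third parties can be chained in any order without knowing about each other.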
Services like news.me are a perfect example of existing stream filtering; they just do it out-of-band.
Newsle is another good example of a content service we’d want to plug in because these news stories are associated with people we follow, even if they originate outside someone’s own content stream.
So far we have a content API on one side of the service that allows us to pull in content from and about people. On the other side of the service, we have a filter API that can remix, organize, and filter what stories appear in the stream. And a reader API to consume the final stream and render it.
SocialX will continue to provide a default, base-level filter and reader native to the service, but all content originates from somewhere else.
Now we have a rich ecosystem that invites new players to create content, filter it, and display it.
In contrast to distributed social networking systems that spread out the network but build in the features, SocialX would distribute the features but have a singular central network.

SocialX Level Three:

Technology businesses need to make money. I respect that. As a technology guy, I’m often on that side of the fence. Content providers want to make money. I respect that, too. As an author and a blogger, I’d like to earn something from my writing.
But I also want to be free from advertising. 
How can we resolve this dilemma?
Advertising is just one way of making money, but I’d like to suggest two other ways.

Patronage

Let’s think about Twitter for the moment. Their need to make money from advertising has led to all sorts of decisions that their users hate. They want to insert ads into the tweetstream. They want control over all Twitter clients, to ensure their ads are shown. They’re restricting what can be done with the Twitter API.
Anything that makes a company’s users hate it can’t be good.
Here’s a different idea. The more followers one has on Twitter, the more valuable Twitter is. At the very top of the ecosystem, there are users with millions of followers, whose tweets are worth thousands of dollars each. Even at the lower end of the system, a user who has 10k or 50k followers on Twitter is likely gaining tremendous value from that network.
What if Twitter charged the top 1% of most-followed users a fee? Twitter would be free to use under 2.5k followers, but followers are capped unless you pay. A fee starting at $20/year, of roughly 1 cent per follower, would raise about $200M a year — in the same ballpark as their current ad-based revenue.
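A sketch of that fee schedule; the exact thresholds are my reading of the numbers above, and cents are used to keep the money math exact:

```python
FREE_CAP = 2_500             # followers allowed before a fee kicks in
MIN_FEE_CENTS = 2_000        # fees start at $20/year
PER_FOLLOWER_CENTS = 1       # roughly 1 cent per follower

def annual_fee_cents(followers):
    """Yearly patronage fee for an account with this many followers."""
    if followers <= FREE_CAP:
        return 0
    return max(MIN_FEE_CENTS, followers * PER_FOLLOWER_CENTS)

print(annual_fee_cents(2_000) / 100)    # 0.0   (under the free cap)
print(annual_fee_cents(2_600) / 100)    # 26.0
print(annual_fee_cents(50_000) / 100)   # 500.0 (a mid-size account)
```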

Ad-Free, Premium Subscriptions

The second opportunity is to charge for an ad-free, premium experience. If 10% of Twitter users paid $10 annually for an ad-free experience, that’s $500M in revenue. Personally, I’d be delighted to pay for an ad-free experience. Part of the reason this doesn’t work well today is that my time reading is split between Twitter, Blogger, WordPress, individually hosted blogs, news sites, Facebook, Google Plus, and so on.
It’s simply not feasible to pay them all individually.
However, if I’m getting the content for all these services through one central network, and can pay once for an ad-free experience, suddenly it starts to make sense. 
SocialX knows who the user is, what they’ve viewed, and which services helped to display the content.
Now we start to see a revenue model that can work across this ecosystem. Revenue could come from a mix of patronage, paid ad-free users, and advertisements. We’ll keep ads in the system to support free users, but now that we have multiple revenue streams, there’s less pressure to orient the entire experience around serving ads and invading people’s privacy.
Example 1: Ben is a paid subscriber of the system. Ben’s $5/month fee is apportioned out based on what he interacts with, by liking an item, sharing it, bookmarking it, or clicking “more” to keep reading beyond the fold. He’s going to pay $5/month no matter what, so there’s no incentive for him to behave oddly. He’ll just do whatever he wants.
If Ben interacts with 300 pieces of content in a month, each gets allocated $5/300 ≈ 1.7 cents.
Those 1.7 cents are shared among the ecosystem partners, something like this:
  • network infrastructure: 15% (SocialX)
  • stream optimization: 15% (news.me, tbd)
  • reader: 15% (the feedlys, tweetdecks, hootsuites of the world)
  • content service: 15% (the twitter, flickr, blogger, wordpresses of the world)
  • content creator: 30% (you, me, joe blogger, etc.)
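As a sketch of that split (note the listed shares sum to 90%, so this leaves the remainder unallocated; the role names are just labels for the list above):

```python
SHARES = {
    "network_infrastructure": 0.15,   # SocialX
    "stream_optimization":    0.15,   # e.g. news.me
    "reader":                 0.15,   # the Feedlys/TweetDecks/HootSuites
    "content_service":        0.15,   # Twitter, Flickr, WordPress, ...
    "content_creator":        0.30,   # you, me, joe blogger
}

def allocate(subscription_cents, interactions):
    """Split one month's fee evenly over interactions, then by role."""
    per_item = subscription_cents / interactions   # Ben: 500/300 ≈ 1.67 cents
    return {role: per_item * share for role, share in SHARES.items()}

split = allocate(500, 300)
print(round(split["content_creator"], 3))   # 0.5 cents per interaction
```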
Example 2: Amanda is a free user of the system. She sees ads when using SocialX. Amanda will be assigned an ad provider at random, or she can choose a specific one. (Because the ads, too, will be an open part of the system.) Ad providers will be able to access user data for profiling, unless the user opts out.
If Amanda clicks on 5 ads during the month, that will generate some amount of ad revenue. The ad provider keeps 20% of the revenue, and the rest flows through the system as above. The revenue is allocated to whatever content Amanda was viewing at the time.
Ad providers are induced not to be evil, because users have a choice, and can switch to a different provider. 
Example 3: George is a famous actor from a famous science fiction show. He has four million followers. Only the first 2,500 of George’s followers on SocialX will be able to view his stream unless George pays a patronage fee. He does, which for his level of usage is $4,000 a year. However, George is also a content provider, so if his content is interacted with (liked, reshared, etc.), he’ll also earn money. Since George is frequently resharing other people’s content, the original content creator will get the bulk of the revenue (25% instead of 30%), but we’ll give George 5% for sharing.

Conclusion 

Let me come back to some of my problems with existing social networks, and see if we’ve improved on any of them:

  • Advertising: We’ve made a good dent in advertising. By having a central network and monetization process that relies on a combination of paid ad-free experiences, patronage, and advertising, we’ve taken some of the pressure off ads as the only revenue model, and hence the primary force behind the user experience. We’re allowing people to select their ad provider, so they can choose whether they want targeted ads, random ads, organic product ads, or whatever they want. Ad providers can’t be evil, or customers will switch providers.
  • Ownership and Control Over Our Data: SocialX owns very little data; it resides in the third-party services. When users can choose one blogging platform over another, or host one themselves, they regain control over their data by being free to choose the best available terms.
  • Privacy: 
    • My primary privacy concern is over the commercial use of my data, and in this regard, I have much more control. I can choose to use a stream filtering service which profiles me and my interests and receive a more personalized stream, or I can choose not to. Either way, the data is only used to benefit me. I can pay to opt-out of advertising totally, or opt-out of targeted advertising at no cost. 
    • I haven’t really thought through the scenario of “I don’t want anyone but a select group of people to see this content,” the other type of privacy concern. My guess is that we could solve this architecturally by having selectable privacy providers that live upstream from the filters and readers. These privacy providers would tag content with visibility attributes as it is onboarded. 
  • Transitory Nature: My concern here was the case of being able to find a given Facebook post where I had solicited beta-readers. In the SocialX case, I see a few fixes:
    • Some of the “stream filter providers” could be search engines. 
    • I could have chosen to originate my post as a blog entry.
    • The platform could support better bookmarking of posts. 
  • Siloing of Networks and Identity: By its very nature, this is the anti-silo of networks and identity.

The main problem we’re left with is that we need a benevolent organization to host SocialX. Because it is a centralized social network, someone must host it, and we have to trust that someone to keep it open.
A few years ago, I was sure this was going to be Google’s social strategy. It seemed to fit their mission of making the world’s information accessible. It seemed to be a platform play akin to Android. Alas, it hasn’t turned out to be, and I no longer trust them to be the neutral player.
It could be built as a distributed social network, but then we’re back to the current situation: lots of distributed social networks, but none with the momentum to get off the ground.
If you’ve made it this far — thanks for reading! This is my longest post by far, and I appreciate you making it all the way through my thought experiment. Would this work? What are the shortcomings? How could this become a reality? I’d love feedback and discussion.

New landing page for A.I. Apocalypse

I created new landing pages for A.I. Apocalypse and Indie & Small Press Book Marketing.

In the case of A.I. Apocalypse, I felt it needed a proper landing page without the distraction of the Blogger right-hand nav column. You can find it at aiapocalypse.com.

In the case of Indie and Small Press Book Marketing, it really needed its own blog, a place where I could have both publishing news and more in-depth articles on book promotion. You can find it at indiebookmarketing.com.

Please check them out, and let me know if you have any questions or feedback.

New home for Indie & Small Press Book Marketing

A.I. Apocalypse was nominated for the Prometheus Award for Best Novel for 2012.

It didn’t make the cut to the finalists, but other awesome novels, including Suarez’s Kill Decision, Doctorow’s Pirate Cinema, and the Kollins’ The Unincorporated Future, did. As these were some of my favorite novels of last year, I can’t begrudge them a bit.

Read the full press release from the Libertarian Futurist Society.

This is an amazing deal: Audible just put the Avogadro Corp and A.I. Apocalypse audio books on sale for $1.99 each!

As I don’t have any control over Audible.com pricing, this is an exciting opportunity to pick the audio editions up at a significant discount compared to their usual price of $17.95. I don’t know how long it will last, so take advantage of it while you can!

Wow, somehow I neglected to post my notes from the March Willamette Writers talk by William Nolan. Sorry!

William Nolan
Co-author of Logan’s Run
Willamette Writers Announcements
·      Open house at the Willamette Writers House on April 21st from 3pm to 8pm
·      WW Conference will be a little different this year: new tracks on self-publishing, Thursday night master classes.
William Nolan
·      Started writing at the age of 10.
·      Made his first sale at 25.
·      Been writing for 75 years, 60 years of it professionally.
·      My mother kept the first piece of writing I ever wrote: a terrible poem with misspellings. I still have it.
·      By age 10, writing adventure stories.
·      Wrote a story about a crime fighting snake.
·      You can do a lot of bad writing when you’re young, and you never know it.
·      If I saw those stories for the first time, I’d say that the author should not become a writer.
·      Most famous for Logan’s Run.
o   There’s been a remake in the running for 19 years
o   Would love to see a remake because the 1976 movie had so many dumb mistakes, and lacked special effects.
·      How did you write Logan’s Run?
o   I was 27. It was my first novel.
o   I went to a lecture at UCLA, where Charles Beaumont (Twilight Zone) issued a challenge to distinguish social fiction from science fiction. Came up with an idea, then thought maybe he could make $50 on a short story.
o   Then George Clayton Johnson said let’s write a screenplay.
o   Nolan said let’s write a novel first, and then the screenplay.
o   They took turns writing in a motel room for three weeks, spelling each other at the typewriter.
o   Nolan wanted to just sell it for $250 to Ave.
o   George said “you promised a screenplay”
o   They wrote the screenplay, got offered $60,000 by MGM.
o   Went for an agent. Decided to hold out for $100,000.
o   From Friday to Monday the offer went up from $60,000 to $100,000. (A ton of money for the 1960s.)
o   They threw out Nolan and George’s script
o   The commissioned one has illogical stuff.
o   The director said “Science fiction doesn’t need logic”.
o   But science fiction needs logic more than anything else. You’re developing a fantasy world, and you need it to hang together coherently.
o   The MGM movie was a disaster. The actors were good, because they were British trained on Shakespeare.
·      Hollywood is just bizarre: Got asked to make a movie just like Zorro, except not named Zorro. They wanted a guy in a mask, with a sword, who wrote his initial on walls, and with a mute Indian sidekick. So he wrote “Nighthawk Rides” at their request, they sent it to the studio, and the studio rejected it as being too close to Zorro.
·      Written 200 short stories. 88 books.
·      Ray Bradbury, one of his closest friends for over 50 years.
o   Nolan did the first scholarly article on Bradbury.
o   Would go to the Magic Castle. Could only go if you were a magician. Ray was. They went to a Houdini séance at the castle, but Houdini never showed up.
·      Grew up in Kansas City for 19 years, then went out to California, then up to Oregon, now in Washington.
·      See The Intruder
o   Written by Charles Beaumont
o   Directed by Roger Corman
o   William Shatner’s first role
o   Gene Cooper was in it.
o   Lots of science fiction people in it.
o   The actors only got a single sheet of notes each, didn’t even know what the picture was about, or what was going on.
·      Scriptwriting is one thing and prose is another
o   You have to change the whole method of presentation for a screenplay.
o   A novel has a character with interior thoughts and desires.
o   With a screenplay, you’ve got visuals and you’ve got dialogue.
o   You have to completely eliminate interior thoughts.
o   [You have to rely on the director and actors]
o   Novel -> Synopsis -> Coverage (one paragraph) -> Sentence
§  “High Concept”: originated with a producer who was too coked out to read the coverage
o   The first thing they do when they buy a novel is throw out the novel.
·      Writing is also a choice of what to expand and what to condense.
o   Beginning (and bad) writers focus on exactly the wrong things: they’ll spend a page on walking into a room, and then say “he meets the girl”.

3D portable printer, big theme at SXSW 2013

I attended SXSW Interactive for the fifth time this year. My first South-by was in 2003, when hot topics that year included wikis, blogging, and augmented social networks, and all the panels took place within the confines of the third and fourth floors.

SXSW has come a long way since then, but it’s still a mind-blowing and fun week, full of networking opportunities, chance encounters, amazing speakers, and new technology.

Here are the highlights of this year:

1. 3D Printing is big. No, huge.

Multiple panels covered the topic every day of the conference. 3D printing isn’t just about devices churning out plastic trinkets. It’s about revolutionizing the world of all manufactured objects, in the same way that the moveable type printing press revolutionized printing, and more recently, ebooks and print-on-demand revolutionized the publishing industry.

Future of 3D Printing Session

The current state of the art is single-material composites and metals, but within a few years we’ll see multi-material printing as well as embedded circuitry.

One of the big aspects missing from the 3D talks was the topic of an ecosystem play. In the same way that Apple came to dominate the world of music for years, and then later the app store ecosystem, and in the way Amazon dominates ebooks, there will be an opportunity for someone to own the object-store ecosystem, which will dwarf every other platform out there.

3D printed custom doll from Makie

Some of the things currently being 3D printed include dolls, clothing, dishes and glasses, plastic items of any design, and toys. And in the design labs, they are experimenting with meat, living (and re-attachable) mouse limbs, circuitry, and morphable objects.


2. Artificial Intelligence is the future of user interface design.
Many panels also covered artificial intelligence, but the kind that makes user interfaces smarter, more predictive and personalized. 
A Robot in Your Pocket Session
Examples of this include filtering many options down to the most relevant ones. Imagine a smartphone transcribing voicemail: using the history of interaction between two people to pick the right vocabulary, to figure out which “Tom” the two would likely be referring to, or to understand and resolve a voicemail reference to “the address I emailed you.”
Example progressions:
  • Brakes
    • Analog: brakes
    • Digital: antilock
    • Robot: crash avoidance
  • Thermostat
    • Analog: thermostat
    • Digital: timer thermostat
    • Robot: Nest
  • Information
    • Analog: encyclopedia
    • Digital: Google Search
    • Robot: Google Now
3. Self-Publishing is More Powerful Than Ever
Self-Publishing in the Age of E Session
This is obviously a personal interest of mine. There were actually very few panels at SXSW this year on publishing, content, or journalism, especially compared to years past, when there were entire tracks on these topics. I heard a large number of people echo my disappointment. Publishing and journalism are still very much industries in turmoil, changing daily, and it is a shock that SXSW has moved past them.
That said, there were two very good talks:
4. Design as Innovation / Responsive Design
Design was a big topic, including both the theme that designers are the new leaders and drivers of innovation in companies, and responsive design, the UX pattern for dealing with different devices. Although I attended only a handful of these panels, it was a big topic of discussion, and there were many more panels I didn’t get to attend.
Changes from Past Years
SXSW is always evolving. Some things I noticed:
  • There were fewer talks overall. Last year I remember 65 different sessions in a single timeslot. On the plus side, things were more centralized, but on the negative side I heard many stories of people who didn’t get into talks they wanted to see. There also wasn’t a journalism/publishing/content track; perhaps that was one of the things to go.
  • There were many more foodcarts around, and for once it was relatively easy to get food between sessions.
  • Wireless access was better. I had only a single half hour without access, and that was at the Omni hotel. 
  • Twitter and Foursquare are still in heavy use.
  • The bar at the Driskill is still the go-to place for networking in the evening.

A Manhattan Project of the Mind (or Brain Wars)
Sharon Weinberger, @weinbergersa, Columnist, BBC.com/Future
Presentation at SXSW
#brainwars

·      Background
o   Do a weekly column called “Code Red”
o   Write about the Pentagon’s role in neuroscience
o   For ten years I’ve written about the technology the Pentagon chooses to fund.
o   Started writing these articles about six years ago.
o   After writing these articles, started getting thousands of letters from people who claimed to be experimental test subjects.
§  Whether these people are right or wrong, they are googling what the Pentagon is doing, and finding out that in fact, the Pentagon does have technology to make voices in people’s heads.
o   This is partly about neuroscience as a weapon.
o   What are they really doing, and what are they not doing? What’s the hype and what’s the reality?
o   There’s some good science, and some bad science.
·      You can trace the Pentagon’s interest back to:
o    J.C.R. Licklider’s vision in 1960: man-computer symbiosis.
§  Seems obvious today, but in 1960 the notion that a computer wouldn’t just crunch numbers, but would interact with you and help you make decisions, was radical.
§  The game Missile Command resembles a real problem the Air Force had in the 1950s, which led it to develop SAGE, a system for monitoring and tracking incoming missiles.
o   Jacques Vidal’s “Toward direct brain-computer communication”
§  Got funding from DARPA as a basic science project to use observable electrical brain signals to control technology.
·      DARPA Director’s Vision 2002:
o   Imagine 25 years from now, when old guys like me put on a pair of glasses or a helmet and open our eyes. Somewhere there will be a robot that will open its eyes…
·      Duke University Medical Center in 2003
o   Taught rhesus monkeys to consciously control the movement of a robot arm in real time, using only signals from their brains.
o   Crude approximation, takes a lot of training.
o   But it works
·      Augmented Cognition (AugCog) 2003/2004:
o   Goal for order of magnitude increase in mental capacity.
o   Want to help soldiers manage cognitive overload.
o   Vision of Augmented Cognition 2005
§  Video showing how sensors can be used to detect when the brain is overloaded. When too many streams of information threaten to overload the user, the interface is streamlined to highlight certain elements and reduce others (e.g. maximize text, minimize images).
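That demo can be reduced to a toy sketch. The threshold, load scale, and element priorities below are invented for illustration, not taken from the AugCog program: above a load threshold, the interface keeps only high-priority elements such as text.

```python
def streamline_ui(cognitive_load, elements, threshold=0.7):
    """Toy version of the AugCog demo: when measured cognitive load
    (0.0-1.0 here) crosses a threshold, keep only high-priority
    elements (e.g. text) and drop the rest (e.g. images)."""
    if cognitive_load < threshold:
        return elements
    return [e for e in elements if e["priority"] == "high"]

ui = [
    {"name": "alert_text", "priority": "high"},
    {"name": "map_imagery", "priority": "low"},
]
print([e["name"] for e in streamline_ui(0.9, ui)])  # -> ['alert_text']
print([e["name"] for e in streamline_ui(0.3, ui)])  # -> ['alert_text', 'map_imagery']
```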
·      Neurotechnology for intelligence analysts
o   They look at hundreds of images each day, trying to glean information.
o   Scientists wanted to watch the P300 signal (object recognition), to see if they could help the analysts better spot things.
o   In theory, they could detect the signal faster than consciousness can interpret it. There’s a ~300 ms delay in the conscious brain.
§  We don’t totally understand why there is the delay.
·      In 2008, did a project called “Luke’s Binoculars”
o   Wanted to use binoculars and P300 signals to help identify objects of interest.
·      In 2012, there is actually a working system: a soldier with an EEG in the lab. But developing technology to use in the field is much harder.
·      Neuroprosthetics: 2009
o   In 2004, 2005, and 2006, one of the biggest problems was roadside bombs. Lots of soldiers were losing limbs.
o   Modern prosthetics are cable systems: you clench a muscle in your back, it is sensed, and a cable moves the arm.
o   It’s very, very hard.
o   Our understanding of which neurons do what is still crude…it’s probabilistic.
o   Mechanical arms are still more useful.
·      2013: Brain Net
o   brain-to-brain interface in rats
o   Same guys at Duke who did the rhesus monkey brain implant
o   They linked two rat brains… one is the encoder, and one is the decoder.
·      Other Directions: Narrative Networks
o   “neuroscience for propaganda”
·      Future Attributes Screening Technology
o   When you go through the airport, a subset of agents are trained to specifically look for suspicious behaviors: facial expressions, body language.
o   DHS wants to use remote sensors to look at physiological indicators: heart rate, sweating, blood flow in the face, etc.
§  They want to identify “mal-intent”: whether you harbor the intent to commit a crime.
o   See the Homeland Security YouTube video
§  Future Attribute Screening Technology
§  Battelle / Farber
o   All sorts of issues: Why are people nervous? Because they are going to commit a crime? Or because they’re skipping work to go to an event? Or because they ate a ham sandwich?
§  Decades of research have shown we still can’t reliably detect lies.
§  We certainly can’t detect mal-intent.
·      Future Directions: Smart Drugs
o   No formal studies done.
o   Anecdotal reports: 25% of soldiers in the field use a smart drug such as Ritalin or Adderall.
o   Should we test smart drugs?
o   Possibly the government is staying away from it because of the long history of problematic research (e.g. the CIA’s LSD experiments), plagued by ethical concerns.
·      President Obama’s Vision 2013
o   Unlocking and mapping the brain; he wants billions of dollars to flow into it. If that happens, DARPA will be one of the major sources of funding.
·      Hype vs. Reality
o   Brain controlled drones?
§  The technique is slow: tens of bits of information per minute, and subject to noise.
§  Not obvious that it can be used yet for field applications.
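To make that bit rate concrete, here is a quick back-of-envelope calculation. The numbers are assumed for illustration, not from the talk: a 20 bit/minute channel choosing among 32 distinct commands.

```python
import math

# Back-of-envelope: how slow is "tens of bits per minute"?
# Assumed, illustrative numbers -- not from the talk:
bits_per_minute = 20      # channel capacity of the interface
command_set_size = 32     # distinct commands an operator might issue

bits_per_command = math.log2(command_set_size)        # 5.0 bits
commands_per_minute = bits_per_minute / bits_per_command
seconds_per_command = 60 / commands_per_minute

print(commands_per_minute, seconds_per_command)  # -> 4.0 15.0
```

Four commands a minute, fifteen seconds each: fine for a lab demo, far too slow for flying anything.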
·      Where are we today?
o   Brain-computer interfaces are already here in a limited capacity: very limited, very crude, and they don’t work well.
o   Neuroprosthetics still years away.
o   Deception Detection: very little agreement, even on the basic science.
o   Mind-controlled drones: a generation away.
·      Implications
o   Technological: expands the battlefield through telepresence
o   Ethical: human testing questions…is it an even battlefield for augmented soldiers?
o   Do we have the right to brain privacy? Do soldiers?
o   It’s hard to have a serious debate about futuristic technology. (The giggle factor)
§  Will: interesting point; given the increasing pace of technology, we need to talk about the future, but it’s hard to do.
·      She was able to participate in an experiment using fMRI to try to read associated brain activity
o   In a two-hour session, surrounded by a ton of gear, researchers had a difficult time even detecting what part of her brain was associated with tapping her finger.
o   If it was that hard to detect such a simple thing, then reading deeper thoughts, reading minds, is very far off, even if it is ever possible.