Updating from API 1.0 to API 1.1

I knew this day would come, but I just could not get around to working on this. Well, now I have to since 1.0 has officially been shut off. So now I’m fixing up both TwittFilter and NewsSnacker.  I will also explain some of the updates on this site.  But in short, all we are going to do is:

  • Change our API calls from /1/ to /1.1/.
  • Change the cURL calls to request JSON instead of XML (see the sketch below).
  • Change all the parse functions to deal with JSON instead of XML.
  • And I think the names of one or two API calls changed.
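
In code, the first three items boil down to something like this. This is only a rough sketch assuming the twitteroauth.php library used in the book; the variable names are placeholders and TwittFilter's own wrapper functions look a bit different.

// Rough sketch of the 1.0 -> 1.1 change (placeholder names, not the book's actual code).
require_once 'twitteroauth.php';

$connection = new TwitterOAuth($consumerKey, $consumerSecret, $oauthToken, $oauthTokenSecret);

// 1) The base URL moves from /1/ to /1.1/ (and 1.1 requires OAuth on every request).
$connection->host = 'https://api.twitter.com/1.1/';

// 2) Request .json instead of .xml (twitteroauth defaults to json already)...
$tweets = $connection->get('statuses/user_timeline', array('screen_name' => 'perivision'));

// 3) ...and parse with json_decode() instead of simplexml_load_string().
//    twitteroauth decodes the JSON for you, so each tweet is already an object.
foreach ($tweets as $tweet) {
    echo $tweet->text . "\n";   // was $node->text on the old SimpleXML object
}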

More coming later.

It's time to start an errata page. I will…

It's time to start an errata page. I will put this together later this week so people can submit corrections and updates. Code and APIs change all the time, so I think this page will only grow over time. If you have something you want to submit now, go ahead and leave a comment on the Errata page.

Twitter launches a new (Updated) developer site

It's no secret that the developer content on Twitter has been a work in progress. So much so that I spent a good deal of time discussing it in the book. Well, of course, now that the book has been released, they've updated the thing.

Today we’re launching the new version of our Twitter Developer Site. With over 1,000,000 registered applications and 750,000 developers building on the platform today, we needed a new home to support the Twitter community better. So, we listened to everyone and gathered your ideas. The new site enhances communication channels, offers improved reference material and documentation, and will foster better interaction for everyone who visits it.

Here are links to the new sections. It's much cleaner, and you can tell from the number of posts that it's just taking off now. However, the most important changes are to the documentation, as posted earlier. The changes are not significant enough that you will feel lost.

Sorry, you need to scroll all the way down to see the list of links. I’m going to try and fix this bug in the theme later… Or just get a new theme. 🙂

  • Discussions – We need a place to talk with each other that gives us more functionality than we have now.
  • Developer Blog – The new blog provides a place to learn about important API announcements, events, tips and how-tos, case studies on great apps, product insights, and more.
  • Better Documentation – The docs have better structure and searchability and should feel more intuitive.
  • Improved Apps Management – The new app manager has a streamlined design that provides more comprehensive information for your app.
  • Enhanced Search – Powered by Apache Solr, searching the new dev.twitter.com also got a boost. We have a unified search engine with filters and expect results to be more relevant. Also, we can search the archive from our Groups mailing list.

New field in tweet objects with the REST & Streaming APIs

From Twitter directly (Taylor Singletary, @episod)…

Hi Developers,

Beginning today you may notice a new boolean field in API responses & streams containing tweets: “possibly_sensitive”. This new field will only surface when a tweet contains a link. The meaning of the field doesn’t pertain to the tweet content itself, but instead it is an indicator that the URL contained in the tweet may contain content or media identified as sensitive content. During this initial testing phase, there’s nothing you need to do with this field and the field values cannot be relied on for accuracy. In the future, we’ll have a family of additional API methods & fields for handling end-user “media settings” and possibly sensitive content.
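
If you want to start looking at the field anyway, here is a minimal sketch in PHP. It assumes you already have the raw tweet JSON in a string; during this testing phase the value can't be trusted, so treat it as a hint only.

// $rawTweetJson is a single tweet from the REST or Streaming API (placeholder name).
$tweet = json_decode($rawTweetJson);

// The field only shows up on tweets that contain a link, so check that it exists first.
if (isset($tweet->possibly_sensitive) && $tweet->possibly_sensitive) {
    // For example, hide the link preview until the user opts in.
    echo "This tweet's link may point to sensitive content.\n";
}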

If you’re curious how this field will ultimately be used, we recommend that you read the following user support articles:

Let us know if you have any questions or concerns not answered by the support articles above.

Twitter Search and examples updated

This post is cut and pasted wholesale from Louis Gray.
Since Twitter acquired Summize way back in 2008, the company’s search engine has been one of the biggest question marks – for while use of the network has risen dramatically, the company has been unable to keep historical data beyond a few days. Recent enhancements, which Twitter termed personalization, help to separate quality updates from junk, but for the most part, the situation with the search engine remains the same. Today, the search engine saw flickers of life with a design revamp that brings the front-end of the engine in line with Twitter’s newish Web interface. It also brings forth the “Promoted” search queries which the company is relying on for revenue.

Over two years ago, I talked about how Twitter’s search engine became increasingly less useful over time thanks to a shrinking index and oddities, like being unable to find any tweets from specific users, or missing data, even when search operators were used. At the time, I asked if this would be a “temporary blip” which I hoped would “come back soon”, but the company has prioritized other features. In the meantime, a deal with Google to provide realtime updates in their search results lapsed. So we’re still stuck with the few days of results, just in a prettier format.

The new Twitter Search Front End, Including Top Trends by Your Geo

In addition to the cleaner look of Twitter search, the service also has a new example pop-up for search operators. While the practically ancient “flight :(” example held over from Summize remains, new are example searches including “from:alexiskold”, “to:techcrunch” and “@mashable”, nods to the GetGlue founder and top blogs who give Twitter a lot of press.

Also included? The optimistic operator: "superhero since:2010-12-27", which says it will return results "containing 'superhero' and sent since date '2010-12-27' (year-month-day)". If you do run that query, you'll get responses dating back all the way to July 23, 2011. Where the rest of the 7 months' results are is anybody's guess.

Search Operator Options on the New Twitter Search

Despite one’s social networking preferences, the data inside Twitter is extremely valuable. The company really could have a lead on being the realtime pulse of the planet. This makes prioritizing new tweets the most important, but I’d bet the world could benefit from more than a week’s worth of content.

New online API docs for Twitter

Well, yet another change at Twitter, but this one I think was needed. In the book I talked about the old Twitter docs and the new set they have been working on. Well, it's changed again. It's cleaner and much easier to read, but it looks like it's still the same information, with SOME of the info being generated directly from the code. Not sure which bits, but that is what I heard directly from the guys at Twitter. At some point I'll dig deeper into what is automated and what is not, but as far as the book goes, this should not prove to be an issue for the reader.

Changes to OAuth are causing issues with TwittFilter

At the beginning of July, there was a change in OAuth that restricts apps from reading direct messages.  To fix this, you need to go to your app settings in Twitter, https://dev.twitter.com/apps.

Change the permission level from read & write to read, write & direct messages.

Next, you need to change a line in twitteroauth.php.

Around line 91, find the following:

function getAuthorizeURL($token, $sign_in_with_twitter = TRUE) {

Change TRUE to FALSE.

Then look for these three lines:

if (is_array($token)) {
  $token = $token['oauth_token'];
}

Add the following line immediately after that block:

return $this->authorizeURL() . "?oauth_token={$token}";
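
Put together, the patched function ends up looking roughly like this. This is against a stock 2011-era copy of twitteroauth.php, so your line numbers and the surrounding code may differ a little.

function getAuthorizeURL($token, $sign_in_with_twitter = FALSE) {  // was TRUE
    if (is_array($token)) {
        $token = $token['oauth_token'];
    }
    // New line: always send the user to the authorize endpoint, which is the one
    // that can grant the read, write & direct messages permission.
    return $this->authorizeURL() . "?oauth_token={$token}";
    // Whatever the original function did below this point is now unreachable
    // and can be left as-is.
}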

For a while, this was not working for me until I realized that the OAuth token will be changed. As such, I have to delete the token information out of the database each time I clear the session. Of course, this is proper practice anyway, so I was a victim of my own oversight.

Now I just need to look for the 403 error and ask the user to clear their session and log back in again. No fun for the user, but that's how it goes. For those people who are using the automated processes of TwittFilter, well, I guess I will have to send out an email letting them know they have to go into TwittFilter, log out, and log back in again… Sigh… Thanks, Twitter.
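
If it helps, here is roughly what that 403 check could look like. The http_code property comes from the twitteroauth connection object; the helper names are made up for illustration and are not TwittFilter's actual functions.

// Try a call that needs the direct messages permission.
$messages = $connection->get('direct_messages');

if ($connection->http_code == 403) {
    // The stored token was issued with the old read & write permission only,
    // so it has to be thrown away and the user re-authorized.
    delete_stored_oauth_token($userId);   // hypothetical helper: clear the stale token from the DB
    session_destroy();                    // drop the session so they sign in again
    echo 'Please log out of TwittFilter and log back in to re-authorize the app.';
}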

Now Apple’s updates are driving me crazy

This book has been way harder and has taken way longer to write than I ever expected. The main reason it has taken so long was Twitter changing and updating its API. From when I started writing, the number of APIs had more than doubled by the time I was doing the final reviews.

Well, after the book was done and going to print, there was yet one more change that messed with the book. Apple announced that support for Twitter would be built into iOS 5. Arrgg!! We did two chapters on setting up an iOS dev environment, adding an OAuth/xAuth library, and making a tweet. Now with this update, we do not need to worry about OAuth/xAuth. There are still good things to learn in Hours 23 and 24, especially if you are new to writing for iOS, but I would say half of the content does not matter anymore unless you are writing something custom.

Thanks Apple and Twitter.. you guys drive us nuts.

Finally updated all my PHP code. Waiting…

Finally updated all my PHP code. Waiting on Bess to get her code together so I can upload Hours 21–24. The book is already for sale electronically, and pre-orders are open for the printed book. I put a link on my regular site, www.perivision.net/wordpress. Still need to redesign the theme for this site. The book is on shelves and accessible July 28, so I still have a month to get this sorted.

…And we’re back. Finally finished the…

…And we’re back. Finally finished the book. So much for using this site to log the writing adventure, but I think recalling back all that happened and documenting it here is still a good idea. Right now I pushed in the final edits for Hours 1 – 20. Bess Ho is working on Hours 21- 24. She has been having computer issues and experiencing the same curses I have I have been enjoying, changing specs. Seems that the iOS4 update really changed things and the same with an Android update. Fun.
Oh well, at least that part is done. Now I need to attend to this website to support readers who I’m SURE will find errors in the book, connect understand something or just questions in general.
First step? Get the source code uploaded. Now the LAST thing you want to do is simply dump the code on the site. Easy way for robots to come by and sweep everything up. No better to zip up each hours code and lets the users download it that way. So.. Guess I still have more work to do eh?