
Apr 16, 2010

Success!!

I have been working hard trying to get my site to work. Until now I had only been able to retrieve data using the proxy pass-through and return the XML in raw format; I wasn't able to get anything to display properly.

Initially I used the Chronicling America API, which provides newspaper information. Since I couldn't get it to display, I tried the Wine Blog API instead. Then I had success! I was able to retrieve the XML with the proxy pass-through and parse it in my JavaScript to display the title and description.
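The display step itself is only a few lines. Here is a rough sketch of the kind of JavaScript I mean; the "display" div id and variable names are just illustrative, and it assumes the proxy hands back the feed as request.responseXML with the usual RSS title and description tags:

// runs once the proxy pass-through has returned the XML (readyState 4)
var xmlData = request.responseXML;
var title = xmlData.getElementsByTagName("title")[0].childNodes[0].nodeValue;
var desc = xmlData.getElementsByTagName("description")[0].childNodes[0].nodeValue;
// write the parsed pieces into a div instead of dumping the raw XML
document.getElementById("display").innerHTML = "<h3>" + title + "</h3><p>" + desc + "</p>";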


It may not seem like much to some, but I'm very excited that I finally got something, anything, to display properly. I plan to continue to see what else I can do with this. The endless hours of working on this have paid off.

Individual Iteration 3

Well I was not as successful as I wanted to be for this iteration. My plan for my project was to use the Chronicling America API with the Google Maps API to display newspapers by location. This is how my site looked for iteration 2.



I started to parse the XML, but I had a lot of difficulty getting it to work. The information would not display properly. I tried to parse the XML in JavaScript without success, so I am now trying to parse it in PHP instead.

Besides completing the requirements for this course, I would still like to finish this iteration for myself. Creating mashups and integrating web services is a great skill to have. I plan to continue working on my project and hope that I can get it to display successfully.

Mar 24, 2010

AJAX is a PAIN

I just posted about the great stuff I was finding on XML.com. I found something else that will be of interest to all of us here.

Bruce Perry posted a blog entry about using an open source JavaScript library called Prototype. Why did he do this? Here are his own words on the subject.

"Why didn't I just create a plain old JavaScript object (POJO) for my application, instead of introducing an open source library? For one, Prototype includes a nifty collection of JavaScript shortcuts that reduce typing and help avoid the reinvention of the wheel. The commonly touted shortcut is $("mydiv"), which is a Prototype function that returns a Document Object Model (DOM) Element associated with the HTML tag with id "mydiv". That sort of concision alone is probably worth the cost of setting up Prototype. It's the equivalent of:
document.getElementById("mydiv");

Another useful Prototype shortcut is $F("mySelect"), for returning the value of an HTML form element on a web page, such as a selection list. Once you get used to Prototype's austere, Perlish syntax, you will use these shortcuts all the time. Prototype also contains numerous custom objects, methods, and extensions to built-in JavaScript objects, such as the Enumeration and Hash objects (which I discuss below).

Finally, Prototype also wraps the functionality of XMLHttpRequest with its own Ajax.Request and related objects, so that you don't have to bother with writing code for instantiating this object for various browsers."


He goes on to SHOW YOU how to set up your files for using Prototype by adding certain lines of code and files. And finally he finishes up the post with examples of how to use the library.
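To give a flavor of it, here is a minimal sketch of the Ajax.Request pattern Perry describes. This is not his actual example: the URL and the "mydiv" id are placeholders, and it assumes prototype.js has already been included with a script tag.

// fetch a URL asynchronously and drop the response into a div
new Ajax.Request("proxy.php", {
    method: "get",
    onSuccess: function(transport) {
        // $("mydiv") is Prototype's shortcut for document.getElementById("mydiv")
        $("mydiv").update(transport.responseText);
    },
    onFailure: function() {
        $("mydiv").update("Sorry, the request failed.");
    }
});

Compare that to the XMLHttpRequest boilerplate we have been writing by hand and you can see the appeal.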

Mar 18, 2010

Co-Inventor of XML goes in on Apple


First I want to give you guys an update on what's happening with the Global IT Club. Well, as everyone knows, this is Ethos Week, and today former Vice President of Intel Corporation, Ken Fine, will be speaking at 5:30pm. Next week we will be making a field trip to Menlo Innovations. This is a great opportunity for anyone who is interested in the company to get an up-close look at what happens on the inside. The field trip will be next week, Friday, March 26th. If you are interested in attending or have any questions, let Jenelle, Chris, me, or Professor Drake know.

Now for my main feature story: the punches between Apple and Google just keep rolling. Yesterday, newly hired Android developer Tim Bray went off on Apple on his blog. Bray, who recently left Sun Microsystems to join Google, expressed his dislike for the way Apple handles its App Store and developers. He explained that Apple's "vision" of its internet future "omits controversy, sex, and freedom, but includes strict limits on who can know what and who can say what. It's a sterile Disney-fied walled garden surrounded by sharp-toothed lawyers. The people who create the apps serve at the landlord's pleasure and fear his anger."

Those are some really harsh words. But Bray did say that as much as he hated that aspect of Apple, he still thought that the iPhone's hardware and software were both great. I really have to agree with everything he said. Recently Apple removed thousands of apps from the App Store that it felt were inappropriate. Some just showed women in bikinis or had names such as iBoobs but didn't actually show any nudity. Apple removed those apps but kept the ones made by "big" companies, such as the Sports Illustrated Swimsuit app and the Playboy app. Why keep some and get rid of ALL others? I just don't get it, and that's why I feel Android has a leg up on the iPhone.

This article can be found on MACWORLD via Macworld UK.

Mar 17, 2010

Individual Iteration 2

For iteration 2 I changed my plans for the APIs I was going to use. Originally, I wanted to create a site using the Blogger API in combination with the Google Talk API to allow users to have live chat with others on the blog site. However, this turned out to be more complicated than I expected. The Google Talk API uses XMPP, or Extensible Messaging and Presence Protocol. In order to connect to Google Talk or any other service that supports the Jabber/XMPP protocol, you'll need to purchase Trillian Pro. So I decided to go a different direction with my project.

The new plan for my site is to use the Chronicling America API along with the Google Maps API. The Chronicling America API gives information about historic American newspapers. So far I have completed the iteration 2 requirements: I have set up the Google Maps API and am able to return the XML from the Chronicling America API using an onClick button.



The next step is to complete iteration 3. I plan to take the XML that is returned and parse the information to display the newspapers by location on the Google Map for Michigan. This iteration seems like it will be more difficult than the others, and I hope that I will be able to complete this task.

Mar 5, 2010

Zillow Real Estate API

For our group project we wanted to provide some housing statistics for people who were looking to relocate. The API that we chose to do this was the Zillow API, a real estate API that allows you to do all kinds of searches for housing information in different areas. The type of info I was looking for was median home prices as well as other pertinent info. I used the following URL to get the data:

$zip = $_REQUEST['zipID'];
$url = 'http://www.zillow.com/webservice/GetDemographics.htm?zws-id=X1-ZWz1c2hu5r9fd7_5f91a&zip=';
$fullUrl = $url.$zip;

This basically provides an XML output that I used to grab the data. Below is the output from the query.



Once I had the data, I grabbed the ZIP code that was returned as well as all of the attribute tags and their child tags. I did this with the following code.

$zip = $xml->xpath('/Demographics:demographics/response/region/zip/text()');

$attr = $xml->xpath('//attribute');

Once I had all of the attribute tags in the array, I was able to access them using a for loop to iterate through all of them and return them back to the JavaScript code and finally to the user's browser.

for($i=0; $i<10; $i++){
echo $attr[$i]->name; //the attribute's name
echo $attr[$i]->values->zip->value[0]; //the value for this ZIP code
echo $attr[$i]->values->nation->value[0]; //the national value
}

You can see that the XPath methods use an object-like structure to access the child nodes. The -> denotes a child node, and once you know the path you can traverse all of the tags. For instance, the query echo $attr[$i]->values->zip->value[0] is used to access the following data:



Once you have grabbed all of the data, you can simply use a series of echoes from the PHP script to send the data back to the JavaScript in a nicely formatted string.



Anyway, the final output looks like this. It is not too fancy, but it provides the necessary data.

7 Day Forecast WeatherBug

For my individual project I used a simple current-conditions search to retrieve the current conditions for an area. For our group project, however, I wanted to provide a 5 or 7 day forecast. Getting the data wasn't much different, but parsing through the results was, as I had to create loops to search through the entries. See my previous posts for the code used to return the data; in this post I'm going to show you how to search through the data and parse the XML. First I used the following URL to get the data.

http://A6357896562.api.wxbug.net/getForecastRSS.aspx?ACode=A6357896562&OutputType=1&zipCode=48189.

This returns the following data:




After I get the data, it is time for parsing. I use SimpleXML to get the data into a variable that I can search with XPath.

$xml = new SimpleXMLElement($data);

Then I simply create a for loop to iterate through the data and display the results in HTML format. Since there are HTML tags, I can't cut and paste the text into the blog, as it tries to interpret the tags. However, I have put up a screenshot of the for loop and the methods used to obtain the XML tag info HERE.

Here is the final output:

More on Parsing XML

I have been working quite a bit more on the parsing portion of Iteration 3 for the projects, and I have found that using XPath is both a very simple and a very effective method of parsing the XML. So far I have been able to find tons of documentation as well as examples of how to search for the tags that you need and how to use that data. I used the weather example in my last post, but for this one I will use the CareerBuilder API as the example. Here is the beginning of the PHP script:

$zip = $_REQUEST['zipID']; //Here I get the variables from the Javascript
$job = $_REQUEST['jobID'];

$url = 'http://api.careerbuilder.com/v1/jobsearch?&DeveloperKey=WDhb6SR73QBGH8TTJV4T&Location=';
$url2 = '&Keywords=';
$fullUrl = $url.$zip.$url2.$job; //Here I concatenate the strings into one URL

Here I pass the PHP script two variables from my JavaScript: a job to search for, as well as a zip code to search within a given location. Here is the JavaScript so it makes a bit more sense. I pass the job variable as a parameter, while the zip variable is a global so it can be accessed anywhere in the JavaScript.

//*********************************************************************************

function getJob(job) { //Here I'm passing the job variable to the function
jbRequest = jobRequest();
if (jbRequest == null) {
return;
}

url= "scripts/career.php?zipID=" + escape(zip) +"&jobID=" + escape(job); //Passing to PHP
jbRequest.open("GET", url, true);
jbRequest.onreadystatechange = displayJob;
jbRequest.send(null);

}


Once in the PHP script, I do the standard request for the data:

$ch = curl_init();
$timeout = 5;
curl_setopt($ch, CURLOPT_URL, $fullUrl); //Here is the full URL variable
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
$data = curl_exec($ch); //Here I put the request return into the data variable
curl_close($ch);


Next I use SimpleXML to get the $data variable into an accessible variable.

$xml = new SimpleXMLElement($data);

Next I start searching for the data. Here is the output from the URL that is passed to the API.

http://api.careerbuilder.com/v1/jobsearch?&DeveloperKey=WDhb6SR73QBGH8TTJV4T&Location=48189&Keywords=Computer%20Programmer




Next I begin the search. Here I search for all JobSearchResult tags:

$attr = $xml->xpath('//JobSearchResult'); //Search for all JobSearchResult tags and put them in an array

This data is put into an array called $attr (for attributes); I made that name up, and it can be anything you want. Once the data is there, I can begin getting the info that I need.

$count = count($attr); //Count the number of results and use that in the for loop

for($i=0; $i<$count; $i++){
echo $attr[$i]->Company; //Access the data in the JobSearchResult/Company tag
echo $attr[$i]->JobTitle; //Access the data in the JobSearchResult/JobTitle tag
echo $attr[$i]->Pay;
echo $attr[$i]->Location;
}

Well I hope that this information is helpful for someone. Here is the output from the parse.



Feb 18, 2010

Proxy Pass-Through

I decided to write a blog post about the proxy pass-through because I know that it can be a little difficult to understand the first time you look at it. I think the most important thing to know is that it really isn't too complicated. Based on Dr. Drake's PowerPoint, I will explain some steps to follow.

To get started, create a basic HTML web page like this. Remember to include the script type in the head. This is shown in slide 5 of Dr. Drake's PowerPoint. Just remember that for iteration 2 we will need an event handler, like a button onClick, and a div for display.


Next, create a JavaScript file, name it ajax, and save it as a .js file. All you have to do is copy slides 6, 7, and 8 of Dr. Drake's PowerPoint, like below. Note that for iteration 2 we won't use window.onload, because we will have an event handler. You will have to change the function displayDetails because we can't use an alert box. Also, the function getDetails is where we call the proxy file.
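Since I can't paste the slides themselves here, this is roughly the shape the ajax.js file ends up with. It is only a sketch with illustrative names (getDetails, displayDetails, a "display" div, proxy.php), and the slides also show the cross-browser way to create the request object, which I've left out:

var request = null;

function getDetails() { //called by the button's onClick event handler
    request = new XMLHttpRequest(); //the slides show the cross-browser version
    request.open("GET", "proxy.php", true); //proxy.php is the pass-through file
    request.onreadystatechange = displayDetails;
    request.send(null);
}

function displayDetails() {
    if (request.readyState == 4 && request.status == 200) {
        //instead of an alert box, put the returned XML into the display div
        document.getElementById("display").innerHTML = request.responseText;
    }
}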


Then all you need to do is create one more file, name it proxy, and save it as a PHP document. Just copy slides 9-13 of Dr. Drake's PowerPoint. The highlighted area is where we will define a hostname. For iteration 2 we will need to insert our own hostname from the web service we have chosen. Some APIs will require a key and some parameters along with the hostname.



This is just one way to call a web service using the proxy pass-through and return the information as XML, which is the requirement for iteration 2. As we continue through the semester, we will take this XML and parse the information to display only the tags we need. I hope this will be helpful and make iteration 2 a little easier to complete.

Jan 12, 2010

Information Overload on Ajax


My first question was what this "Ajax" is that we are supposed to be learning in this class. So I decided to make my first blog post about what I found out about Ajax on the web.

Ajax is short for Asynchronous JavaScript and XML, according to Wikipedia. This not-so-new technique is responsible for the mixed-content pages we have gotten used to seeing, from Google to Weather.com. It works by retrieving data via the XMLHttpRequest object. Hmmm... What is that and how does it work?

An XHR, or XMLHttpRequest, is a Document Object Model application programming interface. Now, I have worked with APIs in the sense of calling routines when I have written programs. XHR works by bringing more data up into the webpage as the user interacts with the page. That is really cool, and to think that the developers of Outlook actually had a good idea.
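From what I can tell, the basic pattern looks something like this; a bare-bones sketch where the file name and div id are made up:

var xhr = new XMLHttpRequest();
xhr.open("GET", "data.xml", true); //true means asynchronous
xhr.onreadystatechange = function() {
    if (xhr.readyState == 4 && xhr.status == 200) {
        //the response arrives here while the rest of the page keeps working
        document.getElementById("content").innerHTML = xhr.responseText;
    }
};
xhr.send(null);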

There was a nice historical overview of the development of XHR, with the different browser makers each trying to have the best version. Then the XHR Wikipedia page got into the hard-core coding examples that my brain wasn't going to understand at midnight. I will be back to this site and many of the others I ran into just by typing "ajax" into a Google search. I hit information overload at this point.

Nov 18, 2009

Parsing Errors

While parsing our XML data to display it, our script sometimes stopped working. This unpredictable behavior was very strange, and it was difficult to find the problem. The error occurred in the eBay part. At first we thought it might be a problem with the eBay server. Monitoring the parsing gave us only a small hint:
it had to be a problem with the picture URLs of the articles.
Working through the eBay response, we realized that the "title", "currentPrice", and "viewItemURL" tags are mandatory. Some tags, like "galleryURL", are not.

<item>
<title>Best 5 LED Bike Bicycle Tail Light Lamp &amp; Bike Holder</title>
<galleryURL>http://thumbs4.ebaystatic.com/pict/3204463049958080_1.jpg</galleryURL>
<viewItemURL>http://cgi.ebay.com/Best-5-LED-Bike-Bicycle-Tail-Light-Lamp-Bike-Holder_W0QQitemZ320446304995QQcmdZViewItemQQptZCycling_Parts_Accessories?hash=item4a9c1692e3</viewItemURL>
...
</item>
For parsing, we use this code:
var text="<table border='1'>";
for (var j=0; j<xmlData.getElementsByTagName("title").length; j++){
//loop for iterating over each item's title, URL, pic, and price
text=text+"<tr><td><a href='"+
//starting a new table row and starting the link
xmlData.getElementsByTagName("viewItemURL")[j].childNodes[0].nodeValue+
//gets the j-th URL
"' target='_blank'>"+xmlData.getElementsByTagName("title")[j].childNodes[0].nodeValue+"</a></td>"+
//gets the j-th title
"<td><img src='"+xmlData.getElementsByTagName("galleryURL")[j].childNodes[0].nodeValue+"' /></td>"+
//gets the j-th picture URL
"<td>"+xmlData.getElementsByTagName("currentPrice")[j].childNodes[0].nodeValue+"</td></tr>";
//gets the j-th price; the loop starts again, or ends
}
text=text+"</table>";
//closing the HTML table tag
appDiv=document.getElementById("itemContent");
appDiv.innerHTML=text;

Today we implemented Yahoo Shopping and parsed the shopping articles with this code:
for (var i=0; i<Json_data.length; i++)
{
textYa=textYa+"<tr><td><a href='"+
//new table row, starting the link
Json_data[i].Offer.Url+
//getting the URL
"'>"+Json_data[i].Offer.ProductName+"</a></td>"+
//getting the product name
"<td><img src='"+Json_data[i].Offer.Thumbnail.Url+"' /></td>"+
//getting the picture URL
"<td>"+Json_data[i].Offer.Price+"</td></tr>";
//getting the price
}
textYa=textYa+"</table>";
YahooDiv=document.getElementById("yelpContent");
YahooDiv.innerHTML=textYa;
We had the same problems with Yahoo as we had with eBay. We discovered that Yahoo Shopping returns not only "Offers" but also "Catalogs" with unusable data. Now, before we try to access the data, we check for an Offer object: if it is there, we carry on; if not, we skip the entry. Checking for the picture URL works the same way: if it is undefined, we return only a blank.
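Here is roughly what that guard looks like in our loop; a sketch, assuming Json_data is the parsed result array from above:

for (var i = 0; i < Json_data.length; i++) {
    if (!Json_data[i].Offer) {
        continue; //a "Catalog" entry with no usable data -- skip it
    }
    //missing thumbnails become a blank instead of an error
    var picUrl = (Json_data[i].Offer.Thumbnail && Json_data[i].Offer.Thumbnail.Url)
        ? Json_data[i].Offer.Thumbnail.Url : "";
    //... build the table row as shown above ...
}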
With Yahoo Shopping we are working on objects; things are not that easy with eBay. Since we use the getElementsByTagName() function, we pick out all tags with the given tag name.
This means that there are no gaps in the galleryURL list. If one item does not have a galleryURL, the function simply takes the next one. So if even one article has no galleryURL, the array of elements is shorter than the array of title tags, causing the exception.
The workaround at the moment is to check for accesses to undefined variables and to avoid them by returning an empty string, adding entries at the end of the array if needed.

Maybe one of you has a good idea how to fix this bug. My idea of taking the item tags and moving through their children does not work...

Update:
Now I've got things working. Using try-catch blocks allows us to insert a galleryURL every time an exception is caught.
try {
if (items[no].getElementsByTagName("galleryURL")[0].childNodes[0].nodeValue==undefined) {
gal_Url=" ";
} else {
gal_Url=items[no].getElementsByTagName("galleryURL")[0].childNodes[0].nodeValue;
}
}
catch (e) {
gal_Url=" ";
}


Code written in a try block is treated specially. If an error appears, the interpreter does not stop the script; instead it jumps to the catch part of the code. An additional option is to use a finally part to perform some clean-up work that should run both in the error case and in the usual case.
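As a generic illustration (not from our project; both function names are made up), the three parts fit together like this:

try {
    riskyParsingStep(); //may throw if a tag is missing
} catch (e) {
    gal_Url = " "; //fall back to a blank, as in our workaround
} finally {
    hideLoadingIndicator(); //runs whether or not an exception occurred
}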

Nov 15, 2009

Group Iteration 2

Well, we made it through iteration 2 and are now working to complete iteration 3. For iteration 2 I used buttons with onClick event handlers to make requests using the proxy pass-through to retrieve web services for the Wine.com API and the Blog API. Also, I incorporated the Google Map into our web page.


Right now I am still working on taking this raw XML data and trying to parse it into useful information. I am having a little difficulty figuring some of this out. We also need to add one more API to our page.

Many in the class had great suggestions for adding another API. I am completely open to all suggestions and advice. Hopefully this will all come together over the next four weeks and the project will be ready for iteration 3.

Nov 14, 2009

Oxygen XML Editor 11.0

While working with XML files received from a server, I was looking for a tool to help me. So far I have worked with around nine different APIs and their XML responses. After implementing eBay, my third API, I desperately wished for a good XML tool.

After some research I found the Oxygen XML Editor. This editor features syntax highlighting, debugging, validation, XPath to address separate tags, XQuery to search the document, and many more nice features.



This tool makes it easier to read XML files. You can copy the content of your server's response into a new XML document, and with one click the XML will be organized so that you can read it. By marking a tag and just clicking "Copy XPath", you get the exact path, with child nodes and so on. The best thing: you can use it for 30 days for free!
Update: Sorry, I forgot. Here is the link!

Nov 11, 2009

Parsing the XML/RSS in JavaScript

After reading James' post about how he used PHP's SimpleXML to parse through his XML results, I decided to explain how I used JavaScript to parse mine. It is kind of complex, but I don't know an easier way to do it. Basically, you have to get the whole RSS response back from the API you call, then look at it and decide what you want to get out of it. Once you know what you want (i.e., the title, the author, or whatever it is), you need to write the JavaScript code that will get it automatically every time the API is called with your AJAX. In the example I did with the class blog that counts everyone's blogs and comments, I needed to get the author names back from the Blogger API. To do this in JavaScript, I did the normal AJAX stuff like Professor Drake showed us in class to get the RSS response from the Blogger API, but instead of just placing the results into a div on the page somewhere or in an alert box, I took those results and parsed through them to get just the author names out. To do that, I just passed the request into a function like this:
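The original screenshot is no longer available, but the call looked essentially like this (a sketch; request is the XMLHttpRequest object from the AJAX code):

// inside the handler, once the response has fully arrived
if (request.readyState == 4 && request.status == 200) {
    var authorNames = readBlogFeed(request.responseXML);
}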



This takes the request.responseXML and passes it to a function I created called readBlogFeed. In that function I executed the following code (which is the tricky part):
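That screenshot is missing too, so here is a sketch of the idea. It assumes the Blogger feed's structure, where each entry has an author tag with a name child:

function readBlogFeed(xmlData) {
    var names = "";
    var authors = xmlData.getElementsByTagName("author");
    for (var i = 0; i < authors.length; i++) {
        // childNodes[0] of the name tag is the text node with the author's name
        names += authors[i].getElementsByTagName("name")[0].childNodes[0].nodeValue + "<br />";
    }
    return names; // hand the list back to the caller
}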



Now, I had a lot more code doing other things in my actual project, but basically the getElementsByTagName and childNodes stuff is what you need. Professor Drake's examples were done the same way with all the childNode stuff, so look at those too for more help. What you'll get back from this function is a list of all the names of authors from the blog site, and the "return" command will send those names back to the original function that called it, which is where I said:
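The screenshot is missing here as well; the line was essentially this sketch:

// put the returned author names into the blogNames div
document.getElementById("blogNames").innerHTML = readBlogFeed(request.responseXML);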



So now those results will be placed in the div I called "blogNames" on my main HTML page. Like I said, it gets a lot more complex, but this should give you some idea of how to get the information you want from your RSS response from the API you are using in your own project(s).

You can see the results I talked about in more detail in my other posts on my people account Website at http://people.emich.edu/mmager/449/.

Hope this helps!

GROUP ITERATION 2

Well, up to this point I think we are on the right track. I am sure we could do better, but this is what we have come up with so far. We have incorporated Google Maps and also two other proxy servers. One of them gives us an RSS feed with concerts all around the country, called 5gig, and the other gives us the rating of a particular band or song, called Billboard. I think our idea is pretty cool, but it seems very hard to implement due to our limited knowledge of JavaScript.

The best part about our project is that 5gig already gives us the latitude and longitude of the particular concert locations. The problem is that we do not know how to tie them together. The problem we are facing is that we are pulling the full RSS feed from both proxy servers. I did try to follow the book and the professor's notes, but I still could not figure out how to pull only particular information out of the XML file. To be honest, the book confuses me more.

So for iteration 2 it would be nice to have the feeds segmented and have them talk to each other. Hopefully we can have more progress by tonight's presentation. By the way, if someone could help us, it would be greatly appreciated. All I can say is that I am trying to follow the book, but it definitely slows me down since it confuses me more. I might be the only one who feels that way, though.

So here is what we have so far .....



Nov 3, 2009

Iteration 2


Well, I had lots of headaches trying to do different things for the personal project and realized it is quite difficult. I am not sure if anyone else felt like me, but I definitely had a hard time. So after playing with Google Maps, YouTube, and so on, I realized that I did not really understand what was happening. That is why I took an easier-to-comprehend approach by following the professor's RSS example. I knew that I could not use a pop-up window that displays raw XML; rather, I had to format it in a way that displays it in my HTML page. So I decided to find another blog and pull its information. Well, again, that was not an easy task.

Finding different blogs was easy, but just to make sure that my proxy file worked I had to run it. Well, in a lot of cases I was getting errors saying that I did not have permission to read the file, or that the file was not accessible. I am not sure if there was a mistake in my proxy PHP file or if those blogs just had some restrictions. Well, after a while I found a blog that is on blogspot.com and also talks about APIs. How cool is that? The guys who participate in the blog are developers, so it was pretty interesting to find out some new cool ideas.

Well, after getting my proxy PHP file working okay, I had to figure out how to pull the information I wanted and display it. This was the place with the most difficulty. I tried to use buttons and different actions, but I never got any feedback from the proxy server. That is why I decided to go with what the professor had, so I referenced what he did. The next problem I had was with the names that I had put into the drop box. I read the XML file and pulled just one name so I could see if my page would work. And it did not!!!! After a while I figured out that the name I was pulling was the actual author of the host blog. The actual names of the authors of the posts were different. So that was a lot of headache for nothing. Anyways, this is what I have so far:

http://people.emich.edu/idimov/Iterat2/test1.html

Oct 30, 2009

What is LINQ

The motivation behind LINQ was to address the conceptual and technical difficulties encountered when using databases with .NET programming languages. LINQ stands for Language INtegrated Query. Microsoft's intention was to provide a solution to the problem of object-relational mapping and to simplify the interaction between objects and data sources. LINQ glues several worlds together.

LINQ to XML enables an element-centric approach, in comparison to the DOM approach. Two classes that the .NET Framework offers are XmlReader and XmlWriter. Expressing queries against XML documents feels more natural than having to write a lot of code with several loop instructions.

Using LINQ to XML:

using System;
using System.Linq;
using System.Xml;
using System.Xml.Linq;

class Book
{
    public string Publisher;
    public string Title;
    public int Year;

    public Book(string title, string publisher, int year)
    {
        Title = title;
        Publisher = publisher;
        Year = year;
    }
}

static class LinqToXML
{
    // publisher values are placeholders added so the calls match the constructor
    static Book[] books = new Book[] {
        new Book("Javascript the Missing Manual", "O'Reilly", 2008),
        new Book("Python Essential Reference", "Addison-Wesley", 2009),
        new Book("Head First Java", "O'Reilly", 2005),
    };

    static void Main()
    {
        XElement xml = new XElement("books",
            from book in books
            where book.Year == 2008
            select new XElement("book",
                new XAttribute("title", book.Title),
                new XElement("publisher", book.Publisher)));
        Console.WriteLine(xml);
    }
}

As you can see there aren't any for loops.
http://msdn.microsoft.com/en-us/netframework/aa904594.aspx