Nov 30, 2009

CSS Ideas

I know that Matt just gave some useful tips in his blog about screen resolutions, so I thought I would share a couple that I used for our group project. The first issue we had was making rounded corners for our divs so the page didn't look so boxy. There are a number of ways to do this, and we started off by using two images that sat in the background.

The problem was that the images were bigger than we needed them to be, and if we wanted to change the width of the div we had to change the picture. The solution was to create four tiny rounded-corner images and a div for each of them. There are many websites that will generate the images for you, such as RoundedCornr and Spiffy Corners. I used RoundedCornr to generate the pictures and then wrote some simple code to place them. The code was almost the same for every image; it just depended on whether the corner was on the left or the right.
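As a sketch of the idea (the class names and image files here are placeholders, not the actual RoundedCornr output):

```css
/* One tiny image per corner, each in its own div. */
.corner    { width: 10px; height: 10px; background-repeat: no-repeat; }
.corner-tl { background-image: url(corner-tl.gif); float: left; }
.corner-tr { background-image: url(corner-tr.gif); float: right; }
.corner-bl { background-image: url(corner-bl.gif); float: left; }
.corner-br { background-image: url(corner-br.gif); float: right; }
```

The left corners float left and the right corners float right, which is why the code is almost identical for each image.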


The other problem we were facing was the page showing up differently on different monitors. Instead of just using the width property, I used the min-width and max-width properties, an idea I found on the website CSS-Tricks. The idea is that your page "shrinks to 780px to accommodate users with 800×600 resolution, with no horizontal scroll, and grows to 1260px to accommodate users with 1280×768 resolution and everything in between". The CSS is actually very simple too; it just requires two lines:
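Here is what those two lines look like, assuming your page sits in an outer wrapper div (the #page-wrap name is just my placeholder):

```css
#page-wrap {
  min-width: 780px;   /* no horizontal scroll at 800x600 */
  max-width: 1260px;  /* stops growing at 1280x768 */
}
```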


Of course, I have not been able to test whether this works, because right now I only have my laptop working. I won't know until class if it solves the problem.

Google Chrome Coming to Mac

Google Chrome is getting ready to launch a beta version for Mac, according to an article in ComputerWorld. This is not surprising considering the success Google has had with the Windows version of Chrome. Google states that the beta "only has 8 bugs left" before it can be released. The downside is that the Mac version will not have as many features as the Windows version. Some of the most popular features that will not be in the beta are the bookmark manager, app mode, task manager, Gears, and sync. Many other features will also be unavailable, most likely because Google still wants to release the beta by the end of the year as planned.

According to the video below by CNET, there are many reasons why some people may not want to try Google Chrome for Mac. Keep in mind, this review was done back in June when all they had was a rough developer version of the browser.



I don't use a Mac and never have, so I can't say whether or not this will be a hit with Mac users. Many, I presume, already have a favorite browser, just like Windows users do. I do think it will become much more popular if Google can offer all of the features available on Windows, plus ones available in competing Mac browsers. If you would like to be notified when the beta version is released, you can sign up with Google and they will email you when it becomes available.

Interesting Blogger Bugs/Issues

Revisiting my blog-counting application earlier today created yet another source of confusion and frustration for me! Yesterday my application was working perfectly and counting all 55 of my comments. I double-checked this number by going through every single post and counting by hand, confirming I had 55 as of yesterday.

Today, the number came back as 50, and after adding one more it reads 51. Of course, my first assumption was that I had messed something up yet again, but then I found an interesting result...

It appears that Blogger is handling some of the comments on certain posts incorrectly. If you go to Ahmed's post, you'll notice that although Blogger claims there are 5 comments on the post, only 2 show up! I immediately noticed this because 2 of those 5 comments USED TO BE mine.




I wanted to make everyone aware of this problem that Blogger seems to be having with comments. Yesterday, my comments showed on Ahmed's post, but today they're mysteriously missing.

Professor Drake, I think I speak for everyone when I say this: Please consider this bug when grading our comments, as it could affect some people's comment counts! Thanks!


Update:

The comments from Ahmed's post appear to be showing up again, so I don't know if this bug was temporary or something that will occur again... Nonetheless, it's something to be aware of.


Update 2:

Checked it again, and now it's returning the correct number and all comments are being shown... Blogger is very shaky with comments.

Update 3:

The post immediately following Ahmed's is giving the same results. The "Best of Both Worlds" post that Justin made has 4 comments, but occasionally only 2 are shown. It appears to affect only the first and last comments on each posting.

Personal Project Finished

Last night I wrapped up my work on my individual project. While I am happy that it is finished, I am disappointed that I could not get my site functioning as I originally planned. As a refresher, my site is for a tree care company, and I was designing an application for users to submit a bid request. As it stands now, I pull data from the Weather Underground API and a Google Calendar RSS feed and display it. The page automatically displays today's weather and the most recent event on the calendar. This would be the first thing I would change if I had more time to work on the project: I would use a JavaScript calendar to dynamically display a calendar so users could pick the specific date they want a bid on.

Another shortcoming of my application is how the data is displayed. I was hoping to throw the results from my APIs into an equation that would display an image showing the likelihood that the bid would happen on the requested day. For example, if the calendar showed 9 appointments and the weather was forecasted as rainy, the application would return a sad face and tell the user to select another day. If it was going to be sunny and there were only 3 appointments that day, the program would return a happy face.
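The scoring idea never got built, but a rough sketch of it in JavaScript might look like this (the appointment cutoff and the forecast strings are my own assumptions, not anything from the actual APIs):

```javascript
// Sketch of the bid-likelihood idea: a busy calendar or bad weather -> sad face.
// The cutoff of 7 appointments and the forecast strings are assumed values.
function bidLikelihood(appointments, forecast) {
  var badWeather = forecast === "rainy" || forecast === "snowy";
  if (appointments >= 7 || badWeather) {
    return { face: "sad", message: "Please select another day." };
  }
  return { face: "happy", message: "Looks good for a bid!" };
}
```

So bidLikelihood(9, "rainy") comes back sad and bidLikelihood(3, "sunny") comes back happy, matching the examples above.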

Finally, I am also unhappy with the functionality of the calendar as I have implemented it. Right now I am iterating over the RSS feed from the calendar. What I needed to do was use the GCal Data API to be able to download and upload data to the calendar. That requires authentication, which didn't seem practical for customers who wouldn't have access to the calendar.

Part of why I settled for my project as it is now is that the owner of the tree service company, a.k.a. my father, showed a lack of interest in the application. He didn't like the idea of customers telling him when to do a bid. He would much prefer they send him an email saying what they were interested in, and then he would set the final date for the bid. I'll have to rethink this system to better match his business needs before spending a lot of time developing an application that would never be used!

Nov 29, 2009

Vanishing in the Digital Age

When I read the story of Wired writer Evan Ratliff's attempt to alter his identity and hide away for a month, I was shocked at how much information his pursuers were able to conjure up. For those of you unfamiliar with the story, Wired challenged Ratliff to change his identity and offered a $5,000 reward to anyone who could find him, take his picture, and say the secret code "fluke". The magazine's editor would place a crumb trail online of the kind of information a detective would be able to dig up on a person: bank accounts, credit transactions, social media accounts, and phone records. From this, the pursuers would try to piece together his new identity and whereabouts.

The crazy stuff people eventually tracked down about him online included all of his previous addresses in the U.S., detailed information about his entire family, his childhood nicknames, his cats' names, his favorite mechanic and authors, the fact that he had celiac disease, his signature on a deed to his apartment in New York, and his purchasing habits. In fact, a lot of this information is posted about all of us all over the web. Don't believe me? Google your name and see what comes up. For those of you who are unlucky like me and have an uncommon name, you're a pretty easy target. Sites such as www.123people.com and www.isthisyour.name scour the internet collecting information about whomever they can find. I'm sure I'll be getting Google Alerts for this blog post in a matter of months. They still show up for the blog posts I made in Bud Gibson's (link is cached by Google) class a few years ago...

Getting back to Ratliff's story, it didn't take long for a devoted group to set out to track him down. One individual set up a web forum where other searchers could share clues and ideas. A Twitter tag was created so trackers could tweet their latest trails and data. Another person created a Facebook application to try to track Ratliff down. Smart enough to know he could use this to his advantage, Ratliff frequented the sites to see whether anyone was hot on his trail or off following the diversions he created. He had a close call in Atlanta where, upon landing at the airport, he checked the Twitter account and realized people were already there looking for him. Eventually his traffic to these sites was his demise: the Facebook application logged users' IP addresses, and the developer eventually tied Ratliff's false identity to his IP address.

So did Ratliff manage to vanish in a digital age? I recommend reading the article to find out exactly what happened to him. What I will say is that it would be interesting to see the results if someone kept an even lower profile than he did (which was already pretty low). He did a good job avoiding traceable transactions by using cash and Visa gift cards as credit cards, but you can only last so long on cash alone. Another issue he had was the use of ID. To really succeed at disappearing, you would need to find a way to create a totally new identity, complete with new IDs, or just move to some remote place in the mountains.

Adding Multiple Google Map Markers into an Array

Some people may ask why you would want to add Google Map markers to an array, and until today I would not have had a good answer. Today, however, is a new day: I wanted to be able to add multiple markers to a map and then remove specific markers with the click of a button.

You could use the command map.clearOverlays() to clear all markers at once, but that would hinder you if you want to remove only certain markers. To remove specific markers, all you need to do is add your markers to an array and then refer to a marker in that array using the command map.removeOverlay(markersArray[i]). Although this may look foreign now, I will take you through adding markers and then removing them.


The first thing you want to do is add a marker to an array. Below is the code:
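Putting the pieces together (this assumes the Google Maps API v2 setup from my earlier post, with map already created as your GMap2 object; addMarkerToArray is just a name I made up):

```javascript
var markersArray = [];  // global, declared outside any function

function addMarkerToArray() {
  var point = new GLatLng(42.251012, -83.625011);
  var marker = new GMarker(point);
  map.addOverlay(marker);     // put the marker on the map
  markersArray.push(marker);  // remember it so we can remove it later
}
```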







Code Broken Down:
var markersArray = [];
This statement creates an array and initializes it with = []. It needs to be placed outside any function so that it is a global variable.

var point = new GLatLng(42.251012,-83.625011);
var marker = new GMarker(point);
map.addOverlay(marker);
Adds a marker at an exact point on a Google map. This repeats the code shown in my previous post, Adding a Marker to a Google Map; please review that post if you need a refresher.

markersArray.push(marker);
This statement pushes your added marker into the array so you can reference it later.

For me, I wanted to remove a marker from a Google map. Now that the marker is in my array, all I need to do when I want to remove it is reference it with the following command:
map.removeOverlay(markersArray[i])

As an added bonus, if you want to see how many markers are in your array, you can use the command
alert(markersArray.length);


I have incorporated this into my group project and will show the functionality at work during our presentation.

Enjoy!!

Quick and Easy JavaScript Reference

Many of you, like myself, may not be as well versed in JavaScript syntax as you would like to be. When I started the IS449 class I barely knew any JavaScript. I had taken the IS247 Web Development class, which spoon-fed me some JavaScript, but I personally don't learn programming well by simply re-typing someone else's commands and code. I prefer learning the commands and then using them in my own programs, but for this course's projects I needed to be able to use a fair amount of JavaScript.

I struggled at first; since I have programmed in other languages, I would guesstimate different commands, only to end up frustrated when they wouldn't work. Then I would search our book, my IS247 HTML book, or the web for snippets of code that did what I was trying to do. I would find some commands that helped, and some pieces of code that were complex, poorly written, or just hard to follow, further adding to my frustration.

Then I decided to head over to the local bookstore and see what I could find. There you will find a small mountain of Java and JavaScript reference books, all with a wealth of information, but many are huge, take a long time to search through, and at $50-100+ each it is hard to just blindly pick one. Then I came across this small, handy book:



ISBN: 978-0-672-32880-0

It was only $20 and has a lot of code segments with brief explanations of what the code does and how it works. I decided to give it a try since it was so affordable, and I have not regretted that decision. It is a handy reference for a lot of simple JavaScript tasks, and any of us should be able to easily follow the code and expand on it for our needs. It is in no way an end-all, be-all book on JavaScript, but it is certainly worth the small investment to make our lives with the class projects a little easier.

Now I have been told you can get the book for cheaper from here.


Additional reference on the book can be found at: http://www.informit.com/store/product.aspx?isbn=9780672328800

Nov 27, 2009

What? A Google OS?

Google announced its project to build a new kind of operating system in July of this year. Chrome OS was unveiled as a sneak peek on November 24, 2009, but it will be at least a year until it's officially released. I had no idea this was going on! With the two powerhouses, Microsoft and Apple, I'm impressed that Google is coming into the OS world to compete with them. Are they biting off more than they can chew? That remains to be seen. Here are some of the highlights:
  • Open source

  • Available to netbook users (initially)

  • Speedy (light weight)

  • Secure (redesigned underlying security architecture)

  • Web applications


Google is currently working with OEMs to bring netbooks to market next year. "All web-based applications will automatically work and new applications can be written using your favorite web technologies." (from the official Google blog). The fact that applications are available via the web is an attractive feature, as it eliminates the need to install and manage applications on a local computer. I recently rebuilt my computer (twice in 3 months), so this concept is quite appealing to me. Google's goal is to create a product focused on internet users looking for speed and security. I'm extremely curious how this OS will evolve; it will be fun to watch.

Nov 26, 2009

Mobile Coupons





Coupons have always annoyed me. I love to save money as much as the next person, but not at the expense of carrying them in my wallet or back pocket. Thanks to Google, coupons are now added to search results and can be shown to clerks to receive the savings.

If a business adds a mobile coupon to its local business listing on Google, the coupon will appear in a Google search on the business's Place Page. With so many individuals owning smartphones, I think it's a win-win for everyone.

I love the fact that companies are bringing new ideas to the table. The power of the internet keeps growing, and I can't imagine what's next. It will take a little while for savvy businesses to catch on, but when they do, a simple Google search as you head to check out might save you some money.

Nov 25, 2009

Another JavaScript Tool

A lot of us have already blogged about a bunch of different JavaScript libraries and tools, but I found another tool which looks quite intriguing and I thought I'd blog about it to share my findings with the rest of you. After speaking with quite a few people from our class who have very limited or no prior experience with JavaScript, I decided to look into more helpful tools. A few different guys have told me they have found useful books teaching JavaScript which have helped them learn it better, and while the best way to learn it is to read up on it and keep practicing it, hopefully this tool, called Blackbird, can be helpful as well.

Blackbird is an "Open Source JavaScript Logging Utility" that does just that: it logs information about your JavaScript. This tool is most beneficial for those of us who already understand JS well enough to write a decent amount of code without much trouble. Essentially, it places a small black window on your screen that pops up different messages related to your JS code as it executes. It is the ultimate replacement for the alert() command, allowing you not only to show messages right within the Blackbird interface, but also to choose what type of message each one is. It offers options for 'debug,' 'info,' 'warn,' 'error,' and 'profile.' By using the different options you can quickly see the status of a message and even toggle between the categories to show only certain messages at a time.

To add Blackbird to your page, all you have to do is download a JS and a CSS file from their site, upload them to your server, and then include a reference to them in the head of the page you want to use Blackbird on.
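The include looks something like this (the file paths are whatever you uploaded them as; the log calls use the message types listed above):

```html
<head>
  <script type="text/javascript" src="blackbird/blackbird.js"></script>
  <link type="text/css" rel="stylesheet" href="blackbird/blackbird.css" />
  <script type="text/javascript">
    log.debug("checking a variable's value");
    log.warn("something looks off here");
    log.error("this should never happen");
  </script>
</head>
```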



Blackbird also has some neat features built in that allow you to position it in different locations on your page and to toggle it on or off. As you can see from the site, it is compatible with nearly all the common browsers, including Internet Explorer 6+, Firefox 2+, Safari 2+, and Opera 9.5. Check it out!

Job Search

Like me, most of us are about to graduate, and I am sure we all want better-paying jobs, or just to find a job, since we will have a new degree. While I was looking for jobs, I came across a really cool job search engine. It is interesting how Google is trying to be on top of the search engine industry, because this tool is powered by them. It is nothing like the famous Monster or CareerBuilder; I would say it is less hassle to use and actually returns better search results.

I am not sure what technology sits behind it, but it pretty much just asks you the type of job, the city, and the salary you would like. From there, the engine searches different job listing sites and displays results based on when they were posted. For example, if you search for an IT job you will get results coming from Dice, Monster, CareerBuilder, and even from the companies themselves.

I think this tool is pretty cool because it takes away the hassle of creating an account and spending time registering. In addition, it pulls listings from multiple sources, so you do not have to be registered in multiple places. The name of the site is indeed.com.

Nov 23, 2009

Screen sizes & resolutions

On a similar note to my last post about W3C's validation standards, I decided to talk a little bit about different screen sizes and resolutions. Designing layouts for the Web is always tricky because of the many different screen sizes and resolutions people use today. I'm sure Professor Drake and I aren't the only ones who would love to have their 42+ inch flat-panel TV double as a huge computer screen! Many people would enjoy this, and some have already found ways to do it using DVI, VGA, AV, and component cables. The point is, even though the majority of people don't browse the Internet on their TV screens, the range of screen sizes and resolution combinations out there is still very large. Multiple screens, resolutions, and even different browsers all play a factor in how users are presented with websites.

While I don't have the ultimate solution to these issues, hopefully I can explain a little bit of how to plan some of your design work so that it looks the same on as many different browsers and screens as possible. As Professor Drake has already emphasized in this class, making projects work with Internet Explorer as well as Firefox is rather crucial. Unfortunately, just because your code may work on both of these browsers, doesn't mean other browsers will display it as you'd expect or want it to. One of the best and easiest ways to ensure clients will see your Website as you want it to be displayed is by testing it thoroughly once you have gotten it to a point where you're satisfied with it. An excellent tool for testing sites in many different browsers is http://browsershots.org/. This site also allows you to specify common settings that an end-user who is browsing your site might have. Options for different screen sizes and color depth, as well as whether or not JavaScript, Java, or Flash are enabled by the user can all be adjusted to test your project. The site contacts multiple virtual machines based on the parameters and browsers you specify then returns screenshots of what your Website looks like within each of those environments. The process can take quite a while sometimes, but it is an excellent tool for testing your work.

Beyond simply relying on that site, there are some things you should know when coding websites that will let you get the results you want on most users' screens. I remember Colin posting a while ago about fluid layouts, using percentages to position your divs rather than static measurements. This can be very useful, but it can still be a little unpredictable depending on the browser and operating system being used. Percentages can also be hard to use when displaying a background image as part of your layout. I used to always code with percentages, until I ran into problems positioning my content over my background images and keeping the same look once the screen got resized or viewed on different sized monitors. An example of this can be seen on Terry's individual project. His layout looks really neat on a standard-resolution, non-widescreen monitor, but when you view it on a widescreen and maximize the browser, you notice that the content adjusts to the size of the browser due to the percentages he has set, while the background image remains the same size. Terry, please realize that I'm not criticizing your work in any way; I'm simply trying to show the occasional issues that percentages and different screen sizes can cause. While the site still works perfectly when the screen is resized, the layout doesn't line up quite as neatly as it did at the original size. One common approach used by most professionals is wrapping content within other divs in order to position things as needed using CSS and ensure they display the same way despite user preferences and screen sizes. A simple example of text wrapped in a div within a larger div can be seen throughout EMU's web pages.
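A bare-bones version of that wrapper pattern looks like this (the #wrapper and #content names and the 960px width are just my example choices):

```html
<!-- in the stylesheet -->
<style type="text/css">
  #wrapper { width: 960px; margin: 0 auto; }  /* fixed-width parent, centered */
  #content { padding: 20px; }                 /* inner div positioned inside it */
</style>

<!-- in the body -->
<div id="wrapper">
  <div id="content">
    <p>Your page content here.</p>
  </div>
</div>
```

Because the inner div is positioned relative to its fixed-width parent rather than the browser window, the layout holds together across different screen sizes.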



The use of what I'd describe as "parent divs" or "div wrappers" is really the key to properly positioning elements throughout your HTML. Not only does this ensure much more accurate results for the user, it also helps your markup validate against the W3C standards. One more crucial point regarding sizes and positioning is the use of background images. I've seen a few projects using a background image within the HTML, and while it looks great in the native resolution of a certain computer, it doesn't look right on screens that are larger or smaller or have a different resolution. To get the best results with background images, try to find or create images that are big enough to fill a larger monitor, but not so big that a user with a smaller monitor won't see any of the details. I could go on about this in greater detail, but I'm sure most of you have stopped reading at this point anyway, so I'll leave it at that! :) I hope this post is somewhat informative and makes sense to those who read it. Different resolutions and screen sizes are an important issue to be aware of when creating websites, and the more you test your work in different environments, the less likely you are to have dissatisfied visitors to your sites.

Nov 22, 2009

HTML W3C Validation

I don't think Professor Drake mentioned anything about grading us on official W3C validation standards, but I thought it was worth mentioning. For those who don't know, W3C sets standards for all Web coding depending on the DocType of your HTML document. They offer a service that lets you validate your HTML markup (and another that validates your CSS). As explained on the website, the validator "checks the markup validity of Web documents in HTML, XHTML, SMIL, MathML, etc. If you wish to validate specific content such as RSS/Atom feeds or CSS stylesheets, MobileOK content, or to find broken links, there are other validators and tools available." Many web development companies these days require that employees follow the standards explicitly. These standards are revised from time to time and vary based on the DocType set at the top of the HTML document. 1.0 Strict is the most up-to-date and strictest standard set you can use. By following these standards, you give your site the best chance of working consistently across all browsers.

While browsing through some of the projects that different groups and individuals have been working on in our class, I noticed that most of them do not currently validate against the standards. For starters, a DocType must be declared for the validator service to even know which standard to validate against. Another site, htmlhelp.com, is great for deciding which DocType is best for your application. I personally like to use the most up-to-date, strictest standard, but you could technically use other versions, or transitional ones.
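For reference, the Strict 1.0 DocType I mentioned (the XHTML version) goes on the very first line of the document, before the html tag:

```html
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
```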

One of the only plugins I use in Firefox is the Web Developer add-on. It adds a small toolbar across the top of the browser window with a huge variety of development tools, including the ability to validate any page you're on through the W3C validation service. By selecting the Tools menu and choosing the "Validate HTML" option, a new tab/window opens that checks the current page against the DocType it declares.




To read more about Web standards, check out W3C's articles, including this one defining my DocType preference, Strict 1.0. Best of luck to everyone!

Inserting multiple lines of text with innerHTML

I know there have been several posts about the DOM, but I found a fabulous site that, in my opinion, is better than the W3Schools stuff. I had been trying to figure out how to display multiple events using innerHTML, with no success. Every time I looped through the events, all I would end up with was the last one. I thought about using the createElement, createTextNode, and appendChild methods, but that would involve re-thinking things, and it was just not clear to me how all of that works.

I have looked for better explanations of some of these functions, especially innerHTML. Until I stumbled on this site, it seemed like I wasn't going to get it to work and would end up using createElement, createTextNode, and appendChild to display event information.

This is what I did to get innerHTML to work with multiple Events:

for (var i = 0; i < something.length; i++) // loop
{
// initialcontent stores the initial text in my "eventDetails" DIV
var initialcontent = document.getElementById("eventDetails").innerHTML;

// I put some text into the eventDetails DIV
var state = xmlDoc.getElementsByTagName('state')[i].firstChild.nodeValue;
document.getElementById("eventDetails").innerHTML = "State: " + state;

// finalcontent stores the information I just put in eventDetails DIV
var finalcontent = document.getElementById("eventDetails").innerHTML;

// I use innerHTML to display initialcontent + finalcontent
document.getElementById("eventDetails").innerHTML = initialcontent + finalcontent;
}
// end for loop
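The loop above rewrites the DIV on every pass. An alternative is to build the whole string first and write it to the DIV once. buildStateList here is just a helper name I made up; it takes the state values already pulled out of the XML:

```javascript
// Build the markup for every state first, then assign it in one shot.
function buildStateList(states) {
  var html = "";
  for (var i = 0; i < states.length; i++) {
    html += "State: " + states[i] + "<br>";
  }
  return html;
}

// document.getElementById("eventDetails").innerHTML = buildStateList(states);
```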

I have run across quite a few comments stating that we shouldn't use innerHTML.

I may try to use createElement, createTextNode, appendChild, and whatever else I need to display event information.

This site has excellent explanations and examples of how to use, from what I can tell, all of the DOM functions.

Sending a text string with %20 to your Proxy Pass-Through

I thought I had everything working after my last post about "Building your Proxy Pass-Through URL". As it turns out, everything works fine until you have, in my case, a space in a city name. I remembered seeing a post about that problem, so I logged in and found Jassin's post about parsing errors. That gave me a clue about what I needed to do, but I really didn't want to change my PHP, and it looked like a lot of work to replace spaces with %20 in PHP. This is what I did instead:

I used a switch statement because I needed to map our trail map city to the Active.com city, defining an event_City variable depending on which city is selected.

switch (event_City)
{
  case 'tallahassee':
    event_City = "Tallahassee" + "%20" + "-" + "%20" + "Thomasville";
    no_event_City = "Tallahassee - Thomasville";
    break;
}


You can see that I took Jassin's advice and used it to build my city strings with %20 instead of spaces. However, after doing that I still couldn't get it to work; it seemed to give me the same results as passing my city string in with spaces.

After playing with that for a while, I decided to try something Josh used with the Yahoo Weather API: the function encodeURIComponent(), which encodes special characters, including these: , / ? : @ & = + $ #.

I really wasn’t sure this would work, because it seemed I would need to decode my string on the server end. Well, believe me, I was not only surprised but extremely happy when I used encodeURIComponent(event_City) and it worked!!! I didn’t have to change anything in my PHP proxy pass-through.


This is my original url: var url = 'active.php?path=' + event_City;


And, this is my new url: var url = 'active.php?path=' + encodeURIComponent(event_City);

So all I really needed to do to get this to work was to build the string with %20 instead of spaces and send it to my PHP with encodeURIComponent(event_City).
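One thing worth noting: encodeURIComponent() already turns spaces into %20, and it turns a literal % into %25, so a string pre-built with %20 actually gets encoded a second time:

```javascript
var city = "Tallahassee - Thomasville";

// Spaces become %20 on their own:
console.log(encodeURIComponent(city)); // Tallahassee%20-%20Thomasville

// A string pre-built with %20 gets its % signs re-encoded:
console.log(encodeURIComponent("Tallahassee%20-%20Thomasville")); // Tallahassee%2520-%2520Thomasville
```

Since PHP decodes incoming query strings once, the double-encoded version arrives back in its %20 form on the server, which may be why building the string with %20 and then calling encodeURIComponent() happened to work here.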

Nov 19, 2009

IE 9 is on the Horizon

While a launch date hasn't been set, IE9 is on the way. Microsoft developers started work on it a little over three weeks ago. I must admit I've become a Firefox user myself because of IE8's lack of performance. Interestingly, Microsoft is boasting that IE9 comes with a "serious performance boost". Also, "according to Microsoft figures, an early build of IE9 already scores four times as high on the Acid3 benchmark for Web 2.0 applications" (on Techradar). IE9's performance gains will rely on PC hardware; it's the first browser to render pictures, videos, etc. using hardware acceleration.

Here's a list of highlighted changes:
  • Performance (a new Direct2D-based system to improve client-side rendering)
  • Richer web support (e.g., rounded corners via CSS3)
  • Support for HTML5
  • Faster JavaScript engine

While Microsoft boasts IE9 will show performance improvement, it still lags behind the competing browsers. Here's a graphic depicting a benchmark of performance:


It amazes me that Microsoft has been in the browser business for so long and can't compete against a newcomer like Google's Chrome. It goes to show it's not their niche. I give them credit for trying to stay in the game, though, and it's high time they improved on IE8, which is almost unusable compared to the other browsers (in my opinion). The graphic above says it all. Given that the release of this browser version is at least a year out, Microsoft may throw more bells and whistles in there. They will need to release this version fairly soon if they want to remain competitive; many users of the current IE version already have a sour taste in their mouths. For more information from a developer's view, you can visit http://blogs.msdn.com/ie/archive/2009/11/18/an-early-look-at-ie9-for-developers.aspx

Nov 18, 2009

Parsing Errors

While parsing our XML data for display, our script sometimes stopped working. This unpredictable behavior was very strange, and it was difficult to find the problem. The error occurred in the eBay part. At first we thought it might be a problem with the eBay server, but monitoring the parsing gave us a small hint: it had to be a problem with the picture URL of the articles.
Working through the eBay response, we realized that the tags "title", "currentPrice", and "url" are mandatory. Some tags, like "galleryURL", are not.

<item>
<title>Best 5 LED Bike Bicycle Tail Light Lamp &amp; Bike Holder</title>
<galleryURL>http://thumbs4.ebaystatic.com/pict/3204463049958080_1.jpg</galleryURL>
<viewItemURL>http://cgi.ebay.com/Best-5-LED-Bike-Bicycle-Tail-Light-Lamp-Bike-Holder_W0QQitemZ320446304995QQcmdZViewItemQQptZCycling_Parts_Accessories?hash=item4a9c1692e3</viewItemURL>
...
</item>
For parsing, we use this code:
var text = "<table border='1'>";
for (var j = 0; j < xmlData.getElementsByTagName("title").length; j++) {
    // loop over each item's title, URL, picture, and price
    text = text + "<tr><td><a href='" +
        // start a new table row and open the link
        xmlData.getElementsByTagName("viewItemURL")[j].childNodes[0].nodeValue +
        // the j-th item URL
        "' target='_blank'>" + xmlData.getElementsByTagName("title")[j].childNodes[0].nodeValue + "</a></td>" +
        // the j-th title
        "<td><img src='" + xmlData.getElementsByTagName("galleryURL")[j].childNodes[0].nodeValue + "' /></td>" +
        // the j-th picture URL
        "<td>" + xmlData.getElementsByTagName("currentPrice")[j].childNodes[0].nodeValue + "</td></tr>";
        // the j-th price; then the loop starts again, or ends
}
text = text + "</table>";
// close the HTML table tag
var appDiv = document.getElementById("itemContent");
appDiv.innerHTML = text;

Today we implemented Yahoo Shopping. We parse the shopping articles with this code:
var textYa = "<table border='1'>";
for (var i = 0; i < Json_data.length; i++) {
    textYa = textYa + "<tr><td><a href='" +
        // new table row, open the link
        Json_data[i].Offer.Url +
        // the offer URL
        "'>" + Json_data[i].Offer.ProductName + "</a></td>" +
        // the product name
        "<td><img src='" + Json_data[i].Offer.Thumbnail.Url + "' /></td>" +
        // the picture URL
        "<td>" + Json_data[i].Offer.Price + "</td></tr>";
        // the price
}
textYa = textYa + "</table>";
var YahooDiv = document.getElementById("yelpContent");
YahooDiv.innerHTML = textYa;
We had the same problems with Yahoo as we had with eBay. We discovered that Yahoo Shopping returns not only "Offer" objects but also "Catalog" objects with no usable data. Now, before we try to access the data, we check for an Offer object. If it is there, we go on; if not, we skip the entry. Checking for a picture URL works the same way: if it is undefined, we just return a blank.
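A sketch of that guard as a small helper (the field names follow the Yahoo response we access above; the helper name and the blank fallback are my own choices):

```javascript
// Build one table row per entry, skipping Catalog entries (no Offer object)
// and falling back to a blank when the thumbnail URL is missing.
function offerRow(entry) {
    if (!entry.Offer) {
        return ""; // a Catalog entry: no usable data, skip it
    }
    var thumb = (entry.Offer.Thumbnail && entry.Offer.Thumbnail.Url)
        ? entry.Offer.Thumbnail.Url : "";
    return "<tr><td><a href='" + entry.Offer.Url + "'>" +
           entry.Offer.ProductName + "</a></td>" +
           "<td><img src='" + thumb + "' /></td>" +
           "<td>" + entry.Offer.Price + "</td></tr>";
}
```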
With Yahoo Shopping we are working on objects; things are not that easy with eBay. When we use the getElementsByTagName() function, we pick out all tags with the given tag name.
This means there are no gaps in the galleryURL list: if one item does not have a galleryURL, the system simply takes the next one. So as soon as at least one article has no galleryURL, the array of gallery elements is shorter than the array of title tags, causing the exception.
The workaround at the moment is to catch accesses to undefined variables and return an empty string instead, which in effect pads the end of the array where needed.

Maybe one of you has a good idea how to fix this bug. My idea of taking the item tags and walking through their children does not work...

Update:
Now I've got things working: using try-catch blocks lets us insert a galleryURL every time an exception is caught.
try {
    if (items[no].getElementsByTagName("galleryURL")[0].childNodes[0].nodeValue == undefined) {
        gal_Url = " ";
    } else {
        gal_Url = items[no].getElementsByTagName("galleryURL")[0].childNodes[0].nodeValue;
    }
} catch (e) {
    gal_Url = " ";
}


Code written in a try block is treated specially: if an error occurs, the interpreter does not stop the script but jumps to the catch part of the code. An additional option is to use a finally part to perform clean-up work that should run in both the error case and the normal case.
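A tiny self-contained illustration of the three parts (the function and values are made up for the example):

```javascript
// Reading a property of null throws a TypeError; the catch part recovers
// with a fallback instead of stopping the script, and the finally part
// runs in the error case as well as the normal one.
function readValue(obj) {
    var result;
    try {
        result = obj.value;       // throws if obj is null or undefined
    } catch (e) {
        result = "n/a";           // recover instead of aborting
    } finally {
        // clean-up work would go here; it runs in every case
    }
    return result;
}
```

readValue({value: 42}) returns 42, while readValue(null) returns "n/a" instead of killing the script.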

JSON TreeView

When receiving JSON data, it's a bit difficult to see the structure. I think you have learned by now that I am very lazy and that I like using tools. I started looking at different tools, for example Charles Proxy. Charles Proxy just copies the received JSON or XML data, and with this copy you can play around a bit in Charles. Unfortunately, I was not able to get it formatted into a tree structure.

Looking for something different, I found an HTML page that displays JSON data as a tree structure.
With this tree structure you can easily navigate through your data. The page shows you the JSON path, the value of each variable, and its type.




If you need the tool offline, no problem. Here is the link to a zip containing an HTML file and the scripts. I hope this will make all our lives easier. And here is the link!

Wolfram|Alpha

I'm not sure how many people have heard of Wolfram|Alpha, but basically it is a search engine that actually calls itself an answer engine. According to Wikipedia it is

"an online service that answers factual queries directly by computing the answer from structured data, rather than providing a list of documents or web pages that might contain the answer as a search engine would".

It doesn't actually search for content that may be stored somewhere else on the Internet; rather, it finds information and makes calculations. A simple example would be to enter your birth date. As you can see, it gives quite a bit of calculated information about the date you were born.


The biggest news from Wolfram|Alpha is that they recently released an API for developers. The way it works is that you send a web request using the same search query you would enter into the search box on their website, and it returns the same results. There are ways you can manipulate the results so they are useful for the information you want. The downside of the API is that it will cost you to use it. You can take a look at the Wolfram API site for more information.

Python Filemaker and XML

FileMaker Pro has been considered a clever tool for making small in-house database applications. The application was brought to my attention while talking to a guy about his database problems. I found that MySQL can be used as a backend for the application, either through FileMaker's ESS technology or through an ODBC driver that connects to MySQL.

I ran across some code that provides a Python wrapper, so we can put a trigger in the FileMaker database that serves as a web service to update the MySQL backend. http://www.lfd.uci.edu/~gohlke/code/fmkr.py.html

FileMaker's approach to publishing XML data relies on specifying database queries in URLs or HTML forms using a proprietary language called CDML (Claris Dynamic Markup Language). The returned XML can then be transformed with CSS or XSL style sheets: the data can be turned into HTML and manipulated in a client's browser for display to a user, or transformed into a format suitable for use by another XML-aware application or server. The browser does all the tricky formatting and manipulation of the data. For smaller data sets it can be more efficient to move all the data to the client as XML, then use Java or JavaScript to perform the manipulations on the client side.

AppleScript or Perl can also be used as a means to help FileMaker export data more efficiently.

While searching the web, I found a couple of new and interesting apps for the iPhone. The Atlantis blasting off to the space station was not the only interesting thing coming out of NASA: along with partners, NASA has developed three new apps for the iPhone, two of them free.

http://www.cnn.com/2009/TECH/11/17/apps.week.space/index.html

The first free app is the official app of the U.S. space agency. It lets you track the position of the space station along with other spacecraft orbiting the Earth.
The second free app lets you view the “Astronomy Picture of the Day”. Every day a new picture is posted on the app, whether it be a galaxy or other astronomical feature.





The third app, costing $2.99, uses the phone's GPS capability to guide you through each night's sky. It also gives you information on the moon phase, lets you view astronomy news, and offers a quiz.
I know this does not pertain to the class; I just thought it was interesting.

Iteration 3

Some of you probably already know, but the professor has decided to go easy on us by changing the requirements for Iteration 3. In his last email he sent the new agenda of what we need to do. The two main items were to clean up the RSS feed so it has clean and meaningful output, or to pull another web service. It seems that most of the class is already done with Iteration 3, since they completed those requirements in Iteration 2.

Actually, these new requirements will definitely help me, since I was not so sure what I was going to do for Iteration 3. I was worried about it because I got lots of help from the professor's examples in class. So here is what I have so far...







So for Iteration 3 I might pull different information from the RSS feed, or maybe incorporate a feed from another blog service, pulling only the time when each post was created, or maybe the authors' profiles. I really want to do something more useful, so if any of you have tips, please let me know.

Nov 17, 2009

Adding a marker to a Google map

In my last post I detailed the steps of geocoding a given address to a set of coordinates. Using these coordinates you are able to map a point on a Google map and place a marker at that exact location.

The first thing we have to do is add a Google map to our web page; if you are unfamiliar with this step, please refer back to my post How to add a basic Google map API to your website. Your basic Google Maps code without a marker should look similar to this.




The next step in the process is to add the point to your map with a marker. Here is the code to do that.
var point = new GLatLng(42.251012,-83.625011);
var marker = new GMarker(point);
map.addOverlay(marker);
GEvent.addListener(marker, "click", function() { alert ("Hello World"); });


Code Broken Down
var point = new GLatLng(42.251012,-83.625011);
This line of code creates a variable called "point" and sets the exact spot where you want to place the marker on your map.

var marker = new GMarker(point);
Here you create a variable called "marker", preparing Google Maps to use the coordinates of your "point" for this marker.

map.addOverlay(marker);
This adds the actual marker to the map.

GEvent.addListener(marker, "click", function() { alert ("Hello World"); });
As an added bonus, you can add an onclick event handler to the marker and call a function that displays an alert.

You can find the final product here; don't forget to click the marker :)

This is a very basic way of adding a point to a Google map; there are many ways to use markers on your website. Yelp uses markers to mark business locations, and this website uses markers to dynamically show the coordinates of a selected point on the map. I look forward to seeing your creativity at work.

Enjoy!!

Follow up- Azure Cloud Computing


I have been posting about cloud computing in some of my recent blogs. I have talked about the new MySQL feature that allows database users to use a more dynamic language that is transferable among in-cloud databases, and so on. I then came across the introduction of Azure, which I mentioned would arrive soon. Well, it's official now: Microsoft Azure will launch on January 1 of the coming year, 2010. I have also written about one of the largest data centers, located in Chicago, and the clever mechanism of storing data on servers kept in containers/trailers, which helps cut operating, overhead, space, and time costs. At a very recent live-blogged event in Los Angeles, California, Microsoft Chief Software Architect Ray Ozzie talked about a number of plans that have gone live this year and ones that will launch next year. In the following paragraphs, I'm going to sum up the main points from the conference about Microsoft Azure and its updates/features.

The first thing to mention is the location of the cloud servers/databases. According to Ozzie, the plan is to run Azure in two centers in each region: in the US, Azure will run in facilities located in Chicago and San Antonio; in Europe, in Dublin and Amsterdam; and in Asia, in Singapore and Hong Kong. Coming back to the new approach Microsoft is taking to housing its servers/data: it is moving servers from racks into containers. For further information on this housing mechanism, click here. Second, Ozzie mentioned an Azure subsystem called Dallas, an open data marketplace that is both public and commercial; the idea behind it is the mixing of public and commercial data. Ozzie also highlighted the early partners/customers of the Azure cloud. NASA is one of the customers that has already been using Azure and Dallas, and the fact that NASA is using Azure can help boost customer confidence and sales of the product.

After Ozzie concluded his presentation, Muglia, the president of Microsoft's Server and Tools Business, took over and shifted to a broader view, stating that cloud computing is not only an infrastructure but also an application model. Muglia then announced Project Sydney, a service that, when implemented, will let businesses connect their servers to the Azure cloud. I think this will be helpful to businesses, given that Dallas is incorporated in Azure as well, giving businesses the option to tap the public or commercial data marketplace. In conclusion, with NASA, InfoUSA, AP Online, Kelley Blue Book, and Domino's Pizza already using Microsoft Azure and its available services/features, it seems that Microsoft is entering the market on steady feet. This can only mean: good luck to the other competitors.

Nov 16, 2009

Building your Proxy Pass-Through URL

Like some of you, I started working on Iteration 3 which involves adding another API to our group website. I thought I would start off with the “easy” stuff and go from there. I had three goals:

1) Figure out how to iterate through the DOM to get multiple events.
2) Figure out how to display the events.
3) See if there is a way to build a URL with a variable that I can change based on a City.

Well, I started with the first problem and found that using my proxy pass-through I got all the information I needed. The DOM contained everything, and by indexing each element I could see information for multiple events. I spent quite a bit of time trying to get that information with a for loop, and that part may even work. However, when I went to the second part and tried to display the information using the innerHTML property, it only displayed the last event. I spent quite a bit of time trying separate DIVs with hard-coded indexes just to see if it would work; I never got that to work. After I emailed my group, one of my group members (James) suggested another solution which I have not looked at yet (mainly because I am a little frustrated with that part). Anyway, I decided to move on to the difficult goal (number 3). I was surprised that I got it to work in a short time, because it seemed to be the most difficult part of everything I was trying to do.

So, if you want to pass a variable into your php proxy pass-through (build the URL) here is how you can do it:

The first thing I did was adjust my "active.php" file by hacking off the end of the original URL and making it a variable. That end was the key text which I have to supply to use the API. Also, note that the part I want to change is just before the key: dma="the city I want to change", and then the key follows with &api_key=... So, the only thing I changed initially was to hack off the key and make it a variable: $key = '&api_key=xxxxxxxxxxxxxx'. Then, as you will notice, I just appended it to the original string, which was HOSTNAME + path + key; this is done with the "." PHP concatenation character, which you can see in my example. Once I knew that worked, all I had to do was figure out how to supply the "path" as an input variable. This was much simpler than I thought. I spent a lot of time looking on the Internet but ended up just experimenting and figuring it out.

This is my original php:

define ('HOSTNAME', 'http://api.amp.active.com/assets/cycling?dma=Detroit&api_key=xxxxxxxxxxxxxxxxxxxxxx');


$path = ($_POST['path']) ? $_POST['path'] : $_GET['path'];

$url = HOSTNAME.$path;

This has the HOSTNAME, path, and my new "key" variable:

define ('HOSTNAME', 'http://api.amp.active.com/assets/cycling?dma=');
$path = ($_POST['path']) ? $_POST['path'] : $_GET['path'];

$key = '&api_key=xxxxxxxxxxxxxxxxxxxxx';

This is where everything gets "appended" to make the URL

$url = HOSTNAME.$path.$key;

Once I realized that I didn't have to do any more with the php script I started playing with the js to see what I needed to do to get the 'path' varible to be the city I wanted to get information about. This is what I ended up doing in my js file.
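Pieced together from the URL line quoted earlier, the js boils down to something like this sketch (only event_City and the url line appear in the post; the surrounding comments are mine):

```javascript
// The city to query; it is passed to the proxy as 'path' and ends up
// between HOSTNAME and the API key on the PHP side.
var event_City = "Detroit";
var url = "active.php?path=" + event_City;
// url is "active.php?path=Detroit"
```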



As you can see, I created a variable "event_City" and assigned Detroit to it. Then I add that variable to the url, which gets passed in as 'path' and ultimately appended to the HOSTNAME, and then the key gets appended to that to complete the string. This is what I get when I request events for Detroit:




Now that I have this out of the way all I need to do is figure out the first two "easy" parts and I will be good.

Eclipse

Matt described the NetBeans IDE in one of his blogs. Now, I would like to present Eclipse, another IDE. I used Eclipse for a java software development project last year and decided to use it for my individual project as well.
Eclipse supports a lot of different languages: Java EE (with JPA), Java, JavaScript, C/C++, PHP, RCP, Pulsar, and more. You can download Eclipse for free for different operating systems: Windows, Mac Carbon, Mac Cocoa, Linux. There are thousands of plug-ins to enhance Eclipse with features like connecting to a version control system (for example Subversion), working with databases, or integrating popular servers into a project (like JBoss or Tomcat).


Eclipse features different perspectives on your project, each optimized for the task you want to perform. If you want to share your files with your team members on a Subversion server, you don't need to see the source code and the document structure of your files; you would rather see the files on the server and be able to compare your local files with the remote ones. When working on the code itself, you don't want that information but your source code with syntax highlighting, auto-complete, and live syntax and error checks. The different perspectives let you focus on your task.
Eclipse supports working in projects, so it's easy to work on HTML, CSS, and script files together. For my individual project, it really helped me avoid annoying errors like a forgotten closing bracket or semicolon. I forgot to download the FTP plug-in, but I am integrating it at the moment. I will post an update after the integration :)
Here is the link for downloading Eclipse

Nov 15, 2009

Group Iteration 2

Well we made it through iteration 2 and are now working to complete iteration 3. For iteration 2 I used buttons with onClick event handlers to make requests using the proxy pass through to retrieve web services for the Wine.com API and the Blog API. Also, I incorporated the Google Map into our web page.


Right now I am still working on taking this raw XML data and trying to parse it into useful information. I am having a little difficulty figuring out some of this. We also need to add one more API to our page.

Many of the class had great suggestions for adding another API. I am completely open to all suggestions and advice. Hopefully this will all come together over the next four weeks and the project will be ready for iteration 3.

Nov 14, 2009

Oxygen XML Editor 11.0

While working with XML files received from a server, I was looking for a tool to help me. So far I have worked with around nine different APIs and their XML responses, and after implementing eBay, my third API, I desperately wished for a good XML tool.

After some research I found the Oxygen XML Editor. This editor features syntax highlighting, debugging, validation, XPath to address separate tags, XQuery to search the document, and many more nice features.



This tool makes it easier to read XML files. You can copy the content of your server's response into a new XML document, and with one click the XML is organized so you can read it. Mark a tag and click "copy XPath", and you get the exact path, with child nodes and so on. The best thing: you can use it for 30 days for free!
Update: Sorry, I forgot. Here is the link!

PHP Proxy Pass-Through using several keywords

While working on our group project, we had some problems with our PHP proxy when we handed over keywords separated by spaces. The result of the cURL request was an error saying that Google Translate was not able to process the request.

To solve this problem, we tried escape(variable), and we tried manually exchanging the space for "%20" in the script. After some research we found out that the proxy translates the "%20" back into a space. So we had to find a solution within the proxy file, and here it is:

$rk = $_REQUEST['rk'];
$token = strtok($rk, " "); // split $rk into parts, cutting at every space
$test = "";                // the new string that will be built up
$test1 = "%20";            // the new delimiter
while ($token != false) {  // iterate over the parts
    $test = $test . $token . $test1; // append the part plus the delimiter
    $token = strtok(" ");            // next part of $rk
}
$rk = $test;



We use the PHP function strtok(string, delimiter) to initialize the splitting process; each further call of strtok(delimiter) returns the next part of the string. This was the only way that let us replace the spaces with their entity.
I guess some of you could use this.
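If you ever need the same replacement on the JavaScript side, split/join does it in one line (a sketch; the function name and sample string are mine). Note that, unlike the PHP loop above, this version leaves no trailing delimiter:

```javascript
// Replace every space in a keyword string with "%20"
// before it is handed over to the proxy.
function spacesToEntity(keywords) {
    return keywords.split(" ").join("%20");
}
// spacesToEntity("bike tail light") gives "bike%20tail%20light"
```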

Browser Recognition How-to

While working on our group project we encountered some issues when dealing with different browsers; this is fairly commonplace in the world of web development. I know some of you have already stumbled into errors when working with different browsers, and those of you who haven't yet most likely will, sooner rather than later. As we have learned from the "Head First Ajax" book, the various browsers available to us and the general public do not all follow the same standards or guidelines. Many have converged on common behavior, while some, Microsoft in particular, tend to handle certain tasks in their own way.



This is not a matter of one browser handling events better or worse; it simply came about during their early development, and these companies now have so much invested in the way their browsers operate that it would be quite expensive and difficult for them to suddenly change behavior they have shipped for over 10 years. And that would raise the argument of whose methods are correct. Lately we are also seeing the advent of mobile browsers from even more companies, all using their own practices, which further complicates the matter.



We as skilled programmers need to be prepared for the large majority of browser users and create programs that adapt to the various browsers with no noticeable difference in operation to the end user. But how do we do that? I am going to show you some very simple code to check the type of browser being used. The JavaScript property we need is "navigator.appName". The navigator object gives us the browser's identification, and appName gives us the browser name. Note: sometimes the more powerful userAgent property needs to be used, as some browsers return the incorrect name; Safari, for example, returns "Netscape." The userAgent value can be a little harder to parse, as it is the complete browser identification string. You would use this in conjunction with an if/then or case statement to run code appropriate for the browser currently in use.



Here is a simple JavaScript function that pops up an alert box telling you what type of browser is currently being used when it is run.
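A minimal sketch of such a function, assuming the classic appName values (the function name and messages are mine); splitting the mapping into its own function keeps the alert call separate:

```javascript
// Map the classic navigator.appName values to a readable label.
function browserLabel(appName) {
    if (appName == "WebTV") {
        return "You are using WebTV";
    } else if (appName == "Netscape") {
        return "You are using a Netscape-family browser (Firefox, Safari, ...)";
    } else if (appName == "Microsoft Internet Explorer") {
        return "You are using Internet Explorer";
    }
    return "Unknown browser";
}

// In the page itself you would call:
// alert(browserLabel(navigator.appName));
```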





This check will only respond to "WebTV", the Netscape-family browsers, or Internet Explorer; it is not meant to be an exhaustive list, just a small example you can expand on.

I used the simple appName check for our group project in connection with the cascading dropboxes: the dynamic list uses an array, and the Firefox browser starts the count at 0 while Internet Explorer starts the count at 1. This was causing an error, and after some research and adjustments the script now works well in either browser. It is a little long/difficult to explain, so if you would like further explanation or more detail, feel free to ask me in class.


Here is a working example with the JavaScript included in the HTML page; simply view the source code and feel free to use it.

Nov 13, 2009

Bing, a new kind of search engine

As I travel around the internet in search of answers to miscellaneous questions using different search tools, Microsoft's Bing never comes to mind. Last week a colleague commented on it, saying they know someone who used a feature through Bing that cut the cost of a flooring project in half. After hearing this I decided to take a closer look at Bing.

What is Bing? It's basically a search engine like Yahoo and Google. Why would we need yet another search engine when there are already two great options out there? Well, Microsoft has released Bing as a different kind of search engine: it's being promoted not as a search engine but as a "decision engine". Unlike other search engines, one feature it offers is the ability to preview a site before visiting it, which can save time (shown below).




What does Bing have to offer that is different from other search tools?

Power Shopping
At the heart of everyone's wallet is savings. Bing offers a cashback service which allows you to search for products and compare prices. If you purchase through Bing they boast of cashback opportunities of up to $2,500 annually. Earnings are typically posted to your account within 60 days of earning the cashback reward. The ability to compare and sort products based on a bottom line price is available.

In addition to the cashback service, a search will provide a user with product reviews, ratings, and prices for each product.






Travel Tools
Microsoft's acquisition of Farecast has allowed them to provide a great travel tool with Bing. If you're searching for plane tickets and you enter "flights from detroit to san francisco", Bing will present you with a "cheap tickets" link that shows fare predictions and a calendar showing how the price may vary depending on your departure and return dates. If you continue into the Bing Travel interface, you can compare prices from multiple travel sites and even see the best time of day to schedule your flight to get the lowest rates.

Visual Search (requires the install of Microsoft's Silverlight)
The visual search allows for a "new way to formulate and refine your search queries through imagery". This is an interesting concept so I invite you to check it out. Here's how it looks:





The features above are probably the biggest. There may be others out there I have yet to discover. Are there any "Bingers" out there who might have something to add? I appreciate the clean front page and features so I will check it out and possibly blog on it later.

Here's a cool website which allows you to see the results from both search engines side by side: http://www.bing-vs-google.com/.

PC World's article, Bing vs Google vs Yahoo: Feature Smackdown, took a look at each and the author prefers Google. Bing appears to be a good competitor from what I've seen.

Nov 12, 2009

Blogger API Limitations :(




Well folks, it deeply saddens me to announce that the blog and comment counter I spent so much time and brain power on is not going to remain useful much longer! :( Apparently the Blogger API has a limit of 200 entries, and that's why the comments are already being counted incorrectly. Our class has a total of 130 blog posts so far, and once that surpasses 200, the post count will begin to be incorrect just as the comment count is. I wish there was something I could do, but I guess we'll all have to resort to counting everything by hand from now on (or using Colin's way). I have contacted the developers of the API, but since it's through Google, who knows if they'll ever even respond...especially before our semester is over.

Sorry for the false hope in this application, but I don't see any way around this 200-entry limitation that Blogger has on their API. The most annoying part of it all is that the limit is not stated anywhere in the documentation! I spent countless hours trying to debug my JavaScript, assuming I had made a programming error along the way while counting the comments, but after a while I realized it wasn't my code. I began searching the Web to see if there was any sort of limitation, and sure enough, 200 is the maximum!

If anyone can think of any neat ways to hack the thing or pull the information back differently from the API, I'd be more than happy to experiment with it some more, but I just don't think there's much of a way around it. I'm going to look into the different querying options they offer to see if I can query two different time frames and then do the math once the two result sets come back. Even so, that would still be a limit of 400, meaning the comment count would still be incorrect.

If something ends up changing I'll be sure to post back with the results, but in the meantime I'm afraid we've reached the end....I guess it's like they say....All good things must come to end. LOL

Here's the link again for those who still want to use it to count the posts, but remember that once we surpass 200 posts as a class, that count will be off as well.

http://people.emich.edu/mmager/449


UPDATE:

I fixed it, everybody!! The code to make this work is now a complete mess, but I basically had to make 8 different AJAX calls to the Blogger API, dividing the query up on a two-week basis rather than getting everything at once. This let me count the comments for each two weeks and then add them all together at the end to get the total. You will notice your numbers are much higher and actually accurate now!
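Here is a sketch of how such two-week windows can be built (the feed URL is a placeholder; published-min and published-max are the standard Google Data API query parameters for date ranges):

```javascript
// Build one feed URL per two-week window between two ISO dates, using
// the Google Data API's published-min / published-max parameters.
function windowUrls(feedUrl, startIso, endIso) {
    var urls = [];
    var twoWeeks = 14 * 24 * 60 * 60 * 1000; // two weeks in milliseconds
    var t = new Date(startIso).getTime();
    var end = new Date(endIso).getTime();
    while (t < end) {
        var next = Math.min(t + twoWeeks, end);
        urls.push(feedUrl +
            "?published-min=" + new Date(t).toISOString() +
            "&published-max=" + new Date(next).toISOString());
        t = next;
    }
    return urls;
}
```

Each returned URL is then fetched with its own AJAX call, and the per-window counts are summed at the end.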

Getting a Yahoo Pipes Badge

Continuing along the Yahoo Pipes topic, a question was raised as to how to publish a pipe on a website. Based on the documentation for the website I learned this can be accomplished with the help of a Pipes Badge. As stated in the documentation, "A Yahoo! Pipes badge allows you to have Pipes generated content on your blog, website, or social network".

There are three types of badges: a list badge, an image badge, and a map badge. The list badge is used for a simple list of items, with description fields and thumbnails. An image badge shows the images present in a feed, displaying each as a thumbnail with a description field if one is available. The third badge type is the map badge: if geocoding is available, a draggable map is displayed with pin points.

To get a badge, all that is required is to open a pipe by logging into yahoo.com and visiting pipes.yahoo.com. Then click the "Get as a badge" wizard shown on the pipe.






Next, select where the badge will be placed. If it's a basic web page select Embed.





Finally, you are presented with the javascript to use on the page which can be copied/pasted into your html file.




Here's how the pipe I created appears on a web page:

Once a pipe is created it's quite simple to integrate it into a web site. I thought this would be much more complicated than it was but after reviewing the documentation and trying it out myself I discovered it's super easy!