logging the HTTP traffic and deriving your script manually

As a first step towards automating access to a web server, you will employ helpers like LiveHTTPHeaders for Firefox and ieHTTPHeaders for Internet Explorer; they are rather indispensable. They display, and thereby reveal, the HTTP network traffic between the browser and the server.
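For illustration only, a logged request/response pair might look roughly like the following hypothetical excerpt (the exact layout differs between the two extensions, and the host, paths and cookie values here are made up):

    GET /account/overview HTTP/1.1
    Host: www.example.com
    User-Agent: Mozilla/5.0 (...)
    Cookie: session=abc123

    HTTP/1.1 200 OK
    Content-Type: text/html; charset=ISO-8859-1
    Set-Cookie: session=abc123; path=/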

With some experience it looks quite straightforward to derive a program, e.g. in Perl using WWW::Curl::Easy, from the log output of ieHTTPHeaders or LiveHTTPHeaders.
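As a minimal sketch of what such a derived script might look like, the following replays a logged POST request with WWW::Curl::Easy. The URL, form fields and header values are hypothetical placeholders standing in for whatever the log shows; this is not any particular site's interface:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use WWW::Curl::Easy;

    # Replay a request taken from the header log; URL and form
    # fields are made-up placeholders.
    my $curl = WWW::Curl::Easy->new;
    $curl->setopt(CURLOPT_URL, 'http://www.example.com/login');
    $curl->setopt(CURLOPT_POST, 1);
    $curl->setopt(CURLOPT_POSTFIELDS, 'user=alice&pass=secret');

    # Copy headers the browser sent, as seen in the log.
    $curl->setopt(CURLOPT_HTTPHEADER,
        ['User-Agent: Mozilla/5.0', 'Referer: http://www.example.com/']);

    # Enable the cookie engine so the session survives further requests.
    $curl->setopt(CURLOPT_COOKIEFILE, '');

    # Collect the response body in a scalar.
    my $response_body;
    $curl->setopt(CURLOPT_WRITEDATA, \$response_body);

    my $retcode = $curl->perform;
    if ($retcode == 0) {
        my $http_code = $curl->getinfo(CURLINFO_HTTP_CODE);
        print "Got HTTP $http_code, ", length($response_body), " bytes\n";
    } else {
        die 'Request failed: ' . $curl->strerror($retcode) . "\n";
    }

In practice you end up transcribing one such block per logged request, plus whatever parsing of the returned HTML your task needs, which is where the tedium sets in.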

When we started employing LiveHTTPHeaders and ieHTTPHeaders, we did derive Perl scripts from these helpers' output. It was still quite tedious.

But you know what happens a while after you get your script running, don't you? The HTML that you depend on, or the linking between the web pages, gets changed. Your script breaks. Do you want to repeat that rather tedious exercise of deriving and augmenting the script again? We expect such changes to keep happening now and then.

And the more opportunities you find to automate such web interface accesses, the more boring and error-prone it becomes to derive your scripts manually.