Auto-fetch web pages and sign in to websites with curl and Perl

#1 - Use curl to sign in to eBay:
a. Create a cookie jar from the eBay main page
curl -s -L -c cookies.txt -b cookies.txt -e ';auto' -o ebay.html 'http://www.ebay.com/'
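(Flags used in every step: -s silent mode, -L follow redirects, -c cookies.txt write received cookies to the jar, -b cookies.txt send cookies from it, -e ';auto' set the Referer header automatically when following redirects, -o save the page to a file.)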

b. Download the sign-in page
curl -s -L -c cookies.txt -b cookies.txt -e ';auto' -o ebay-signin.html 'http://signin.ebay.com/ws/eBayISAPI.dll?SignIn'

c. Log in with your eBay account and password
curl -s -L -c cookies.txt -b cookies.txt -e ';auto' -o ebay-pass.html \
-d MfcISAPICommand=SignInWelcome -d siteid=0 -d co_partnerId=2 \
-d UsingSSL=1 -d ru= -d pp= -d pa1= -d pa2= -d pa3= -d i1=-1 \
-d pageType=-1 -d rtmData= -d userid=your-ebay-account -d pass=passwd \
'https://signin.ebay.com/ws/eBayISAPI.dll?SignIn&UsingSSL=1&pUserId=&co_partnerId=2&siteid=0'

d. Save your 'My eBay' page
curl -s -L -c cookies.txt -b cookies.txt -e ';auto' -o myebay.html 'http://my.ebay.com/ws/eBayISAPI.dll?MyEbay'

Now open the local file myebay.html in a browser; if you entered your password correctly, you will see your My eBay page.
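The same flow can be scripted in Perl with LWP::UserAgent and HTTP::Cookies. Below is a minimal sketch, assuming the form fields from the curl example above are still what eBay expects (the empty fields are omitted, and the sign-in form may have changed since this was written):

#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Cookies;

# In-memory cookie jar shared across requests (curl used cookies.txt)
my $ua = LWP::UserAgent->new( cookie_jar => HTTP::Cookies->new() );

# curl -L also follows redirects after a POST; LWP must be told to
push @{ $ua->requests_redirectable }, 'POST';

# a. and b. visit the main page and the sign-in page to collect cookies
$ua->get("http://www.ebay.com/");
$ua->get("http://signin.ebay.com/ws/eBayISAPI.dll?SignIn");

# c. post the sign-in form (field names copied from the curl example)
$ua->post(
    "https://signin.ebay.com/ws/eBayISAPI.dll?SignIn&UsingSSL=1&pUserId=&co_partnerId=2&siteid=0",
    {
        MfcISAPICommand => "SignInWelcome",
        siteid          => 0,
        co_partnerId    => 2,
        UsingSSL        => 1,
        userid          => "your-ebay-account",
        pass            => "passwd",
    },
);

# d. fetch and save the My eBay page
my $resp = $ua->get("http://my.ebay.com/ws/eBayISAPI.dll?MyEbay");
open my $fh, ">", "myebay.html" or die "Cannot write myebay.html: $!";
print $fh $resp->decoded_content;
close $fh;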


#2 - Use curl to download a range of pages from a website.
( Create a directory "tmp" in the current directory to hold the downloaded HTML pages. )
# This downloads 21 files, ch1.htm through ch21.htm, and saves them as ./tmp/ch-1.html ... ./tmp/ch-21.html
curl 'http://docs.rinet.ru/P7/ch[1-21].htm' -o ./tmp/ch-#1.html
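The same range download in Perl is just a loop; a short sketch using LWP::Simple, with the URL and naming scheme taken from the curl line above:

#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple qw(getstore is_success);

# Mirror curl's [1-21] URL range with a plain loop
for my $n ( 1 .. 21 ) {
    my $status = getstore( "http://docs.rinet.ru/P7/ch$n.htm", "./tmp/ch-$n.html" );
    warn "ch$n.htm failed (HTTP $status)\n" unless is_success($status);
}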



#3 - Simple Perl script to fetch a page

#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new();

# Fetch the page (LWP follows redirects for GET by default)
my $resp = $ua->get("http://www.ebay.com/");

# Check for errors; print the page if the request succeeded
if ( $resp->is_success ) {
    print $resp->decoded_content;
} else {
    print "Error: " . $resp->status_line . "\n";
}
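The script above uses a plain GET. To submit a form instead, the POST helper from HTTP::Request::Common builds the request; a sketch with a placeholder URL and field names:

#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Request::Common qw(POST);

my $ua = LWP::UserAgent->new();

# POST builds a form-encoded request (URL and fields are placeholders)
my $req = POST "http://www.example.com/login",
    [ userid => "your-account", pass => "passwd" ];

my $resp = $ua->request($req);
print $resp->is_success ? $resp->decoded_content : "Error: " . $resp->status_line . "\n";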

(to be continued...)