Automatically fetching web pages and signing in to websites with curl and Perl

#1 - Use curl to sign in to eBay:
a. Create a cookie jar from the eBay main page
curl -s -L -c cookies.txt -b cookies.txt -e ';auto' -o ebay.html ''

b. Download the sign-in page
curl -s -L -c cookies.txt -b cookies.txt -e ';auto' -o ebay-signin.html ''

c. Log in with your eBay account and password
curl -s -L -c cookies.txt -b cookies.txt -e ';auto' -o ebay-pass.html \
-d MfcISAPICommand=SignInWelcome -d siteid=0 -d co_partnerId=2 \
-d UsingSSL=1 -d ru= -d pp= -d pa1= -d pa2= -d pa3= -d i1=-1 \
-d pageType=-1 -d rtmData= -d userid=your-ebay-account -d pass=passwd ''

d. Save your 'My eBay' page
curl -s -L -c cookies.txt -b cookies.txt -e ';auto' -o myebay.html ''

Now open the local file myebay.html in a browser; if you entered your password correctly, you will see your My eBay page.
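You can also check the saved page from the shell instead of opening a browser. The sketch below greps for a "sign out" link, on the assumption that only a signed-in page contains one; the marker string and the sample HTML are made up for illustration, so adjust them to match the real page.

```shell
# Create a sample "signed-in" page so the sketch runs stand-alone;
# with a real download you would grep myebay.html itself.
page="myebay-sample.html"
printf '<html><body>Hello! <a href="/signout">Sign out</a></body></html>\n' > "$page"

# Assumption: a page showing a "Sign out" link means the login worked.
if grep -qi 'sign out' "$page"; then
    status="signed in"
else
    status="not signed in - check userid/pass and cookies.txt"
fi
echo "$status"
```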

#2 - Use curl to download a range of pages from a website.
( Create a "tmp" directory under your current directory to hold the downloaded HTML pages. )
# this will download 21 files, ch1.htm through ch21.htm, and save them as ./tmp/ch-1.html ... ./tmp/ch-21.html
curl '[1-21].htm' -o './tmp/ch-#1.html'
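The [1-21] range and the #1 output placeholder can be tried locally with file:// URLs, so no website is needed; the chapter files below are made up for the demo.

```shell
# Make three small chapter files to stand in for a remote site.
mkdir -p ./tmp
for i in 1 2 3; do
    printf 'chapter %s\n' "$i" > "./tmp/ch${i}.htm"
done

# curl expands [1-3] into three URLs; "#1" in -o is replaced by the
# matched number, so each page lands in its own output file.
curl -s "file://$(pwd)/tmp/ch[1-3].htm" -o "./tmp/ch-#1.html"
ls ./tmp/ch-1.html ./tmp/ch-2.html ./tmp/ch-3.html
```

Quoting the URL matters: without the quotes, the shell would try to expand the brackets itself before curl ever sees them.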

A simple Perl script to fetch a page

use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Request::Common qw(POST);   # POST is handy for form logins

my $UA   = LWP::UserAgent->new();
my $req  = HTTP::Request->new( GET => "" );
my $resp = $UA->request($req);

# Check for errors; print the page if it's OK.
if ( ( $resp->code() >= 200 ) && ( $resp->code() < 400 ) ) {
    print $resp->decoded_content;
} else {
    print "Error: " . $resp->status_line . "\n";
}

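The same fetch-and-check logic can be written in plain curl: -f makes curl exit nonzero on an HTTP error, so a script can branch the same way the Perl code branches on $resp->code(). The local test file below is an assumption standing in for a real URL.

```shell
# Stand-in page so the sketch runs without a network; swap in a real
# URL in practice.
printf 'hello page\n' > ./page.html
url="file://$(pwd)/page.html"

# -s silences the progress bar; -f turns HTTP errors into a nonzero
# exit code instead of printing the error page.
body=$(curl -s -f "$url")
rc=$?
if [ "$rc" -eq 0 ]; then
    printf '%s\n' "$body"
else
    echo "Error: curl exited with code $rc for $url"
fi
```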
(to be continued...)