500 Server errors: fun...

(OP)
I've been given a job nobody would ever want. I'm sure you all know that you can take a Microsoft Excel document and get it to spit out a web page. Anybody who hates WYSIWYG editors knows just how many nasty table tags those exports produce. My boss at a web design firm decided to build a catalog in Excel and then export it to HTML.

Now I have an HTML document weighing in at 1.5 MB.

That's crazy. He handed it to me and said, "It used to load so quickly - I don't know what happened to it. Could you take a look?" Hehehe... one look at the code and I knew what had happened. TD tags up the yin-yang and all sorts of markup the W3C has never seen. So now I have to get rid of it all. At first I tried cutting and pasting just the necessary stuff - basically starting over from scratch. But the catalog is huge; even after I trim all the fat I can, it will still be around 300K. That's a lot of typing to do by hand.

So I thought I'd try writing a Perl CGI program that would extract the data out of the old table and put it into a new, cleaner table (with 4 columns instead of 40). The old server I used to program on ran Perl 4, but now that I'm on a new server I'm trying to take advantage of Perl 5 and some of its newer parsing modules. I went to a tutorial website and copied things word for word, but it won't work. Help :o) Can you tell me what's wrong here? Thanks... here's my code.


#!/usr/local/bin/perl
use CGI;
use LWP::Simple;
use HTML::TokeParser;

$cgiobject = new CGI;

#retrieve web page
$fetchURL=$cgiobject->param("name");
unless ($fetchURL)
{$fetchURL=""}
$webPage=get($fetchURL);


$p = HTML::TokeParser->new(\"http://www.site.address.com/~directory/page-name.htm";);
print $cgiobject->header;
$parser->get_tag("title");
print "Content-type: text/html\n\n";
print "$parser->get_trimmed_text";




This is not working...

Another note: if I put print "Content-type: text/html\n\n"; toward the beginning of the program, everything after it seems to get ignored, and I'm not sure why. The tutorial I've been working with often has something like that near the top (usually print $cgiobject->header;, with the same result).
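
For what it's worth, here is a minimal sketch of how I read the HTML::TokeParser documentation as intending this to work: send the HTTP header exactly once, hand the fetched page to the parser as a reference to the string (a plain string is treated as a filename), and call get_trimmed_text outside of the double quotes. The "name" parameter, the error check, and the title extraction are just placeholders standing in for the real catalog logic, and I haven't run this against the actual page:


#!/usr/local/bin/perl
# Untested sketch - parameter name and title handling are placeholders.
use strict;
use CGI;
use LWP::Simple;
use HTML::TokeParser;

my $cgi = CGI->new;

# Send the HTTP header once, before any other output.
print $cgi->header;

# Fetch the page named in the "name" parameter.
my $fetchURL = $cgi->param("name") || "";
my $webPage  = get($fetchURL);

unless (defined $webPage) {
    print "Could not fetch '$fetchURL'\n";
    exit;
}

# A reference makes TokeParser parse the content itself rather than
# treating the argument as a filename.
my $parser = HTML::TokeParser->new(\$webPage);

# Skip to the <title> tag and print its trimmed text.
$parser->get_tag("title");
my $title = $parser->get_trimmed_text("/title");
print "<h1>$title</h1>\n";

# From here the idea would be to loop over get_tag("td"), collect the
# cell text with get_trimmed_text("/td"), and print it back out four
# cells per row.


The main differences from my version above, if this reading is right, are the single header, passing \$webPage instead of the URL string, and calling the method outside the quoted string.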

Liam Morley
lmorley@wpi.edu
:: imotic ::
