Well, the way vi does it isn't magic.
The editor loads the whole file into memory, moves everything below that 1st line "up" in memory, and finally writes the whole file back out to disk. This works great until you get huge files (a megabyte or more on most modern systems).
In script you can do the same thing: open the file using the FileSystemObject, suck it all into memory using ReadAll, find the end of the 1st line, snip everything after it out using something like the Right( ) string function, and flush the result back out to disk. For a large file you might instead open the file, read it using ReadLine, drop the 1st line, copy the rest line-by-line to a new file on disk, close both files, delete the old one, and rename the new one to the original name.
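Here's a rough sketch of the first (small-file) approach. The file name is just a stand-in, and it assumes Windows-style CR/LF line endings; I use InStr( ) plus Mid( ) to do the snip, which amounts to the same thing as the Right( ) trick:

```vbscript
' Sketch only: "C:\test.txt" is a placeholder path, and the file is
' assumed to use vbCrLf (CR/LF) line endings.
Const ForReading = 1, ForWriting = 2

Dim fso, ts, text, pos
Set fso = CreateObject("Scripting.FileSystemObject")

' Suck the whole file into memory.
Set ts = fso.OpenTextFile("C:\test.txt", ForReading)
text = ts.ReadAll
ts.Close

' Find the end of the 1st line and keep only what follows it.
pos = InStr(text, vbCrLf)
If pos > 0 Then text = Mid(text, pos + Len(vbCrLf))

' Flush the trimmed text back out over the original file.
Set ts = fso.OpenTextFile("C:\test.txt", ForWriting)
ts.Write text
ts.Close
```

Remember this holds the entire file in a string, so it runs into the same memory wall as vi on huge files.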
There are limits to how fast FileSystemObject I/O can be made to run, but a combination of these two techniques gives the best balance of performance and flexibility:
Open the original file, read the 1st line using ReadLine, and discard it. Then, in a loop until EOF, read large chunks of data using the Read method, specifying a large chunk size like 65,536 bytes, and write each chunk out to a new file. Then close both files, delete the old file, and rename the new one to the original name as above. You might want to experiment with the chunk size to find a good value here. In theory you can use up to 2 billion (2 U.S. billion: 2,000,000,000) bytes per chunk. Don't do this! I suspect anything bigger than about 500,000 will start to show seriously diminishing returns due to memory contention and virtual memory processing in general.
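That combined technique might look something like this. Again the file names are placeholders, and the 65,536-byte chunk size is just a starting point for experimentation, not a tuned value:

```vbscript
' Sketch only: file names and chunk size are illustrative.
Const ForReading = 1
Const ChunkSize = 65536

Dim fso, src, dst
Set fso = CreateObject("Scripting.FileSystemObject")

Set src = fso.OpenTextFile("C:\test.txt", ForReading)
Set dst = fso.CreateTextFile("C:\test.tmp", True)

' Read the 1st line and throw it away.
If Not src.AtEndOfStream Then src.ReadLine

' Copy the rest in large chunks until EOF.
Do Until src.AtEndOfStream
    dst.Write src.Read(ChunkSize)
Loop

src.Close
dst.Close

' Delete the original and rename the new file to the original name.
fso.DeleteFile "C:\test.txt"
fso.MoveFile "C:\test.tmp", "C:\test.txt"
```

This never holds more than one chunk in memory at a time, so it scales to files far bigger than the ReadAll version can handle.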
But I don't know of anything fancy to just tell the system to "get rid of line 1."