Use a separate file, file type or other method to access nearly 29 million addresses
(OP)
Hello all

I have the full and up to date set of UK addresses and postcodes obtained from the Government website. These were downloaded in CSV format and I have managed to separate them into smaller CSV files and stripped the columns that were not required.

I’m now left with nine columns, which I have replicated in a DBF. The fields are named:

CODE -->

datetimeup (c,16)
postcode (c,12)
numbname (c,30)
flatnumb (c,30)
add01 (c,50)
add02 (c,50)
add03 (c,50)
add04 (c,50)
add05 (c,50) 

I am aware that a DBF has a size limit of 2GB or 1 billion records.

I put together and ran a piece of code in an effort to append from those CSV files, but as I am sure you have guessed by now, the DBF quickly fills up and produces the error message "postcodes.dbf is too large".

CODE -->

USE postcodes EXCLUSIVE
ZAP
a=0
DO WHILE .T.
  APPEND FROM "pp-complete_"+LTRIM(STR(a))+".csv" DELIMITED
  a=a+1
  IF a=29
    EXIT
  ENDIF
ENDDO 

The purpose of this project is for someone to be able to enter a postcode, search for it, and have the address auto-filled elsewhere.

I only need to know the following:

Is there a separate file, file type or other method I can use to access nearly 29 million addresses?

Any pointers or suggestions would be much appreciated.

Thank you

Steve Williams
VFP9, SP2, Windows 10

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Could it be due to insufficient memory?
Klaus

Peace worldwide - it starts here...

RE: Use a separate file, file type or other method to access nearly 29 million addresses

(OP)
Hello Klaus
No. The DBF stops when it reaches 2 gigs.
It’s a known fact that DBFs have a size limit.

Thank you

Steve Williams
VFP9, SP2, Windows 10

RE: Use a separate file, file type or other method to access nearly 29 million addresses

CODE --> code

USE postcodes EXCLUSIVE
ZAP
a=0
DO WHILE .T.
  APPEND FROM "pp-complete_"+LTRIM(STR(a))+".csv" DELIMITED
  a=a+1
  IF a=29
    EXIT
  ENDIF
ENDDO 



To me it looks like you're trying to append from 29 *.csv files:
1. APPEND FROM "pp-complete_"+LTRIM(STR(0))+".csv"
2. APPEND FROM "pp-complete_"+LTRIM(STR(1))+".csv"
3. APPEND FROM "pp-complete_"+LTRIM(STR(2))+".csv"

and so forth, up to the 29th file (pp-complete_28.csv).

What is the real name (not the calculated one) of your *.csv file?
Maybe I misunderstand something.

Klaus



Peace worldwide - it starts here...

RE: Use a separate file, file type or other method to access nearly 29 million addresses

(OP)
That’s correct. I split the files because the original CSV file was too large (13 GB), and I removed the data I didn’t require.
It would make no difference if I merged all 29 CSV files into one, because appending from them would still be too large for a DBF.
I hope that makes sense.

Thank you

Steve Williams
VFP9, SP2, Windows 10

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Have you tried it with only one *.csv - and if yes, does that lead to "postcodes.dbf is too large"?
Roughly what is the size of one *.csv in MB? (As you have split the files and also deleted columns, it should be less than 500 MB, right?)


Peace worldwide - it starts here...

RE: Use a separate file, file type or other method to access nearly 29 million addresses

(OP)
It does work with about five of the CSV files and then stops.

Thank you

Steve Williams
VFP9, SP2, Windows 10

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Perhaps something was damaged in one of the remaining files (No. 6 onwards) when you deleted columns in it.
I would test that by transferring from *.csv No. 6, then *.csv No. 7, and so on.
It would also be useful to know how big each *.dbf would be (in records AND size).

Instead of gathering everything in one *.dbf, I would also try to build 29 *.dbf files and later try to append them one after another.

This could be done by changing your program a little bit.

It might also become clearer if you transferred the data into Excel with its import-from-text function - just to see what happens there.

If everything is OK, then maybe it would be possible to search across several FoxPro files, depending on how many can be loaded or kept in main memory at the same time.

There are many experts here who can say more about this.


Klaus


Peace worldwide - it starts here...

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Divide 2GB by 29 million and that's roughly 74 bytes per address. Of course you fail with 340 bytes per record. I assume add04 (c,50) is the city. Then there's a lot to reduce. You don't want to store "London" several million times, do you?

Store the cities, streets, etc. in separate tables and only store IDs in the address table: CityID, StreetID. That's data normalization, in short.
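A minimal sketch of what that could look like (table and field names are just placeholders, not taken from your data):

CODE --> VFP

* Sketch only: lookup tables plus a slim address table
CREATE TABLE cities FREE (cityid I, cityname C(40))
CREATE TABLE streets FREE (streetid I, streetname C(60))

* The big table stores 4-byte IDs instead of repeating the names
CREATE TABLE addresses FREE (;
	postcode C(12), ;
	numbname C(30), ;
	streetid I, ;
	cityid I)
INDEX ON postcode TAG postcode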

Chriss

RE: Use a separate file, file type or other method to access nearly 29 million addresses

You could also try storing each of the address lines as a memo field instead of a C(50).

A C(50) containing an address line is going to be almost all padded spaces. A memo field isn't.

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Let's take it from address parts and how to put them together to address lines, then any address (with a ton of exceptions) is like

CODE

<<Addressee>>
<<Company>>  (Remark: optional for mail to an employee, for example)
<<House number or name>> <<Street>>
<<City>>
<<Postcode>>
UK (Remark: for international mail sent to the UK) 

And then there are PO Boxes and other address types and details differing from that scheme. But let's stick with that for the moment.
Then the actual address data could simply be a tuple of (ID, AddresseeID, CompanyID, HouseNumber, StreetID, CityID). And that's not exaggerating normalization when you're talking about 29 million UK addresses.

A query getting the address lines would then start from an address record and join in the addressee name, company, street, city, and whatnot.
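Such a query could look like this (a sketch; the table and field names just follow the tuple above):

CODE --> VFP

* Sketch: rebuild the address lines for one address record
SELECT ad.HouseNumber, st.StreetName, ci.CityName ;
	FROM address ad ;
	INNER JOIN street st ON st.StreetID = ad.StreetID ;
	INNER JOIN city ci ON ci.CityID = ad.CityID ;
	WHERE ad.ID = m.lnAddressID ;
	INTO CURSOR curAddressLines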

It seems you have just one CSV with all address details, as would result from such a multi-join query. Whatever isn't available separately in the CSV would need to be taken from the address lines, but I'd not store them as-is from the CSV into a DBF, except as one step of an import. Neither char nor memo justifies storing "London" for each London address. That's one core idea of database design.

Assuming add04 normally has the city name, you get a list of all cities with SELECT DISTINCT add04 AS City FROM import.dbf after the import of the (partial) CSV file. You'll have a hard time getting a clean list of cities when even this governmental data doesn't include the address parts as separate, single fields and only has the address as address lines, perhaps not even always in the same format. But at least it seems you have some details separated already, like the postcode field.

Chriss

RE: Use a separate file, file type or other method to access nearly 29 million addresses

(OP)
I appreciate the responses guys. I’ll work through them and post back.

Thank you

Steve Williams
VFP9, SP2, Windows 10

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Hello,

maybe it helps to put the lines for add01-add05, ALLTRIM()ed, into a memo with five or fewer lines:

cMemo = ""
IF !EMPTY(ALLTRIM(m.add01))
  cMemo = m.cMemo + ALLTRIM(m.add01) + CHR(13) + CHR(10)
ENDIF
* ...and the same for add02 to add05

Or, instead of line breaks, separate them like #add1# <add2>, @add3@, (add4), or better <a1>add1</a1>,... if you need to search for content in a specific address line.

regards
tom

RE: Use a separate file, file type or other method to access nearly 29 million addresses

I agree with

Quote (Chris Miller)

Store the cities, streets, etc. in separate tables and only store IDs in the address table: CityID, StreetID. That's data normalization, in short.
For once I don't agree with Mike (about Memos), that would only make the problem worse. IMHO there are only 3 solutions:
1. Go with Chris's suggestion and split the info into several DBFs linked together.
2. Use VFP Advanced which doesn't have the 2GB limit (but is of course maybe not so legal).
3. Use a real SQL-database.

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Since it's fixed data that's not going to be edited, write some code to go through and figure out the maximum size of the actual data for each of the 5 address lines. Then create the table with those fields only as wide as you need them.

If it were me, I'd have a loop that collected that information for each of the 29 files and threw it into a cursor. Then, just run a query against that 29-row cursor to find the max for each column.
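A sketch of that scan (assuming an empty work table named import with the nine-column structure from the first post):

CODE --> VFP

* Sketch: collect the per-file maximum widths, then take the max of the maxima
CREATE CURSOR curWidths (fileno I, w1 I, w2 I, w3 I, w4 I, w5 I)
FOR lnI = 0 TO 28
	SELECT 0
	USE import EXCLUSIVE
	ZAP
	APPEND FROM ("pp-complete_" + LTRIM(STR(m.lnI)) + ".csv") DELIMITED
	SELECT m.lnI AS fileno, ;
			MAX(LEN(ALLTRIM(add01))) AS w1, MAX(LEN(ALLTRIM(add02))) AS w2, ;
			MAX(LEN(ALLTRIM(add03))) AS w3, MAX(LEN(ALLTRIM(add04))) AS w4, ;
			MAX(LEN(ALLTRIM(add05))) AS w5 ;
		FROM import INTO CURSOR curOne
	INSERT INTO curWidths SELECT * FROM curOne
	USE IN import
ENDFOR
* the widths to use in CREATE TABLE are the max over all 29 files
SELECT MAX(w1), MAX(w2), MAX(w3), MAX(w4), MAX(w5) FROM curWidths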

Tamar

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Well, Dan,

With memos, you can store each line at its minimum size. That can still cost more space because of the technically necessary 8 bytes split across DBF and FPT, but less so when combining all address lines into one memo. The necessary linefeed can be CHR(10) only, and you still store much less than all the spaces wasted in separate char fields, even when they are optimized. When the longest names are far longer than the average length (and that's very common), even a length-optimized char field can cost a lot. With static data you don't have memo bloat (you only store each memo value once anyway), so you can set the blocksize to 0.

Besides that, to stay with my example of London: even though it's just a 6-letter city name, an ID for it will cost only 4 bytes per London address when storing a CityID, while it takes 6 bytes in a char or memo. In an optimized city field you'll likely need something like char(20) and so waste 14 bytes per London address.

Chris.

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Quote (Dan)

For once I don't agree with Mike (about Memos), that would only make the problem worse.

Dan, my memo suggestion might not be the best solution on offer[note], but I'm curious to know why it would make the problem worse. Is it to do with the 8-byte overhead for each memo field? If so, surely that would only be an issue if the average length of an address line was less than eight bytes, which I wouldn't have thought was the case.

Note: I quite like Tamar's suggestion of figuring out the actual maximum line length. It is likely to be considerably less than 50 chars.

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads

RE: Use a separate file, file type or other method to access nearly 29 million addresses

(OP)
Thank you for all these responses, guys. I have found that some of the records are corrupt, in that some postcodes are either missing or hold incorrect information in some of the CSV files.

I also noticed that in some of the columns the data contains an address such as "The Building, Somewhere". Because there is a comma between Building and Somewhere, Somewhere gets shifted into the next field when appended into the DBF, putting a lot of the records out of alignment. I am aware that this is because the delimiter is a comma.

I will endeavour to have a look over the next few days on stripping back and checking these CSV files, after which I'll post back with an update.

Thank you

Steve Williams
VFP9, SP2, Windows 10

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Oh my, and that's about it regarding the IT capabilities of governments.

Just one question: are names like that in quotes? Because VFP has a weakness there. You can output CSV data quoted (so commas within a quote-delimited string don't count as field separators), but VFP can't read such data back in, especially when a quoted string spans multiple lines, which is then also allowed.

Instead of importing into a VFP DBF, Excel could do a better import job, and then you can address the data by OLE automation. (APPEND FROM XLS is unlikely to work, even when you save in the legacy binary format, but with OLE automation you can also get a region of data quickly over to VFP.)

Chriss

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Steve,

You can open a csv file in Excel as Chris suggested.

To overcome the embedded commas, Save as Type: Text (Tab-delimited) (*.txt).

You can then parse the text file using tab delimiters instead of commas. Of course embedded tabs could be a problem, but I found that to be a highly improbable occurrence.
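In VFP that would be something like this (a sketch; the file name is made up):

CODE --> VFP

* Sketch: append from the tab-delimited text file saved by Excel
USE postcodes EXCLUSIVE
APPEND FROM "pp-complete_0.txt" DELIMITED WITH TAB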

HTH

Steve


RE: Use a separate file, file type or other method to access nearly 29 million addresses

Quote (Mike Lewis)

if the average length of an address line was less than eight bytes

The 8-byte overhead of a memo isn't a problem, because it's for the whole address, not because each line is longer than 8 bytes.

Still, normalizing the addresses into multiple tables gives the smallest address table itself. You surely need more disk space overall, but I don't think that's the problem.

Chriss

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Steve,

If it's not sensitive data, can you post one of your CSV files here, please?

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Quote:

If it's not sensitive data, can you post one of your CSV files here, please?

Probably not sensitive, but almost certainly proprietary PAF data and therefore subject to strict licensing conditions.

At least, that's my understanding. Steve will know more about that than me.

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Quote:

The 8 byte overhead of a memo isn't a problem because it's for the whole address, not because each line is longer than 8 bytes.

That's interesting, Chris. Are you saying that, if a record contains multiple memo fields, the total overhead for the record is still only eight bytes? Or are you thinking that there would only be one memo field for each address?

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Why use more than one memo field? You can still separate the address lines, by CRLF or by LF only - CHR(10).

To be precise, that makes it 8+4=12 bytes of overhead. But even in a solution with optimized char fields, assume the max length is an outlier and the average length is just 50-60% of that; then you still waste 40-50% of the bytes. What you save is, very clearly, 29 million times the sum of the char field shortenings.

If you use 5 memo fields, that only pays off when you save more than 8 chars per field in comparison with Tamar's method.

Chriss

RE: Use a separate file, file type or other method to access nearly 29 million addresses

That's very clear, Chris. I had originally assumed a separate memo field for each line of each address, because I sort of thought it would make processing the address lines easier. But I can see now that that is not really an issue.

So, on that basis, moving the address lines to a memo field would be a very big saving. But that's not to say that Steve should rule out any of the other suggestions.

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Also consider: when using memo fields, the final size can be up to 4 GB (2 GB dbf + 2 GB fpt), because the file size limits are separate for the dbf and fpt files.

Manni

RE: Use a separate file, file type or other method to access nearly 29 million addresses

There's a crux in that 4GB argument. To make use of it you'd need a balance of data between DBF and memo. The address lines will make up most of the data, so the 2 GB fpt limit hits you earlier than the 2 GB dbf limit, and then prevents you from adding further data; additional records could then only work with NULL memos.

Chriss

RE: Use a separate file, file type or other method to access nearly 29 million addresses

I have followed the discussion so far with great interest.

The main problem is VFP's 2GB capacity.
If I understand correctly, you want to have additional information about a postcode in a large * .csv file (comma-separated text file).

This problem increases the larger the volume of data processing data for Foxpro is.
(It could also have been that the request came for 50 million or more addresses).

So I thought about a different approach -

instead of
recreating (with a lot of effort) a VFP file to have the information needed reduced - why can we not leave the big text file as it is (or split it into portions )
and first of all isolate the search term with additional information in a separate text file.?

Only then can you access this separate file with VFP, and
compare their information with the input of the user.


Here is a model with 5 files:
1 batch file
3 text files (so as not to overload main memory, divide the large text/csv file into 3 or more text files)
1 result file

The batch file (such.bat) looks like this:

[screenshot of the batch file]

The content of the text files (No1.txt to No3.txt, with different contents) looks e.g. like this:

[screenshot of a text file]

The output file (Result.txt) receives this content after all 3 text files have been searched, very quickly, with FINDSTR (a command built into Windows):

[screenshot of the result file]

The FINDSTR command is explained here in more detail:
https://en.wikipedia.org/wiki/Findstr

As my model shows, many text files can be accessed one after another (important in order not to overload main memory) - and if you are more familiar with batch files than I am, you may be able to access these files with a loop (No1.txt to Nonn.txt) specified in the batch file.
With a text file of 1 GB (approx. 800,000 lines) I could not detect any delay in the search in my test.

I would be very interested in your opinion.

Maybe I'll make another mistake in this.

Best regards Klaus

Peace worldwide - it starts here...

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Klaus,

Thank you for your contribution, which looks very interesting, and could be worth following up.

However, you said: "why can we not leave the big text file as it is". Are you aware that the 2 GB limit also applies to text files, not just to DBFs or FPTs? That's true whether you want to import or export the file, append or copy it, access it with low-level file functions, or read it into a memory variable.

It's possible to use the File System Object to process text files larger than 2 GB, but that would be horribly slow, given the overhead involved.

This doesn't invalidate your suggestion, as you also mentioned the possibility of splitting a large text file into smaller portions. In fact, I think Steve said he had already done that. But it is something to keep in mind.

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Mike,

Klaus' idea isn't bad, because the search for the parts of interest is done by the shell command findstr, which doesn't have that 2GB limit. The output it creates will usually be much shorter (if you don't search for something as short as a single letter) and can be read from VFP. What I think is not good about this solution is the performance.

I think if you explicitly tell Windows to index the csv files, you could use Windows search from VFP to find the right file. Keeping just one file then would not be very productive; this would need splitting the file into very many small ones, of which a search can point to one or a few. That helps reduce the amount of data read in per search.

There are also drivers addressing CSV (txt) through ODBC, but that'll be slow, as you have no indexes. So I'd modify Klaus's idea to first split the data into, say, MB-sized portions and let Windows index them, then use Windows search to find something and read that into DBF format for further processing.
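A sketch of driving findstr from VFP and pulling the few hits back in (search term and file names are made up):

CODE --> VFP

* Sketch: let findstr scan the CSV files, read its small output back into VFP
lcSearch = "SW1A 1AA"
lcCmd = 'findstr /C:"' + m.lcSearch + '" *.csv > findresult.txt'
RUN &lcCmd
* the hits are few, so they fit comfortably into a string or a cursor
lcHits = FILETOSTR("findresult.txt")
lnHits = ALINES(laHits, m.lcHits)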

But overall I still think you could make much more use of importing into multiple DBFs rather than one.

Chriss

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Mike and Chris,
Thank you both for your quick comment.

Mike, when I wrote "why can we not leave the big text file as it is", I meant that we shouldn't change the file type at first (i.e. convert it to a VFP file). I shouldn't have mentioned the word "big" here; I agree with you. (Differences between languages are sometimes a problem for me.)
I also noticed it during testing when I wanted to load data into a program (Notepad++): at some point I got the message "file too big".
Hence also the splitting of such files.
I also noticed that FINDSTR glided across the files very quickly.

Chris: Your reference to indexing the text files with Windows is a good hint.
I've never needed this before - but can you tell me how to do it?
I would like to try creating and indexing a lot of very large text files on my own, because I'm curious how the performance is then in connection with FINDSTR.

Danke,
Klaus

Peace worldwide - it starts here...


RE: Use a separate file, file type or other method to access nearly 29 million addresses

(OP)
Appreciating all these replies. Still looking at the original file from the government website and will post back soon.

Thank you

Steve Williams
VFP9, SP2, Windows 10

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Quote:

I also noticed it during testing when I wanted to load data into a program (Notepad++): at some point I got the message "file too big".

That's right. Notepad++ has a limitation of 2 GB.

The version of Notepad that comes with Windows has, I believe, a lower limit (512 MB?). If you think that's bad, the limit was 64 KB in Windows 95 (if I remember rightly).

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Thanks Chris

meanwhile I found this article, written in German (easier for me), concerning indexing with Windows:
https://www.giga.de/downloads/windows-10/tipps/win...
Translation by Google (pictures visible at that link):

Windows 10: adjust, deactivate or activate indexing for quick searches - Here's how
Robert Schanze


Windows 10, like its predecessors, creates an index of certain files to enable a quick search for them. The process is also called indexing.
We'll show you how you can adjust the indexing options, add more folders for quick searches, deactivate or activate them completely.

By default, Windows 10 indexes the Users folder, offline files, and the Start menu. Program and system files are not indexed because they are searched for less often. We'll show you how to add additional folders and locations for indexing, deactivate or activate them.

Windows 10: Adjust indexing for quick searches

To adjust the indexing of files in Windows 10, you do the following:

Open the indexing options by pressing the key combination [Windows] + [R], typing in

control.exe srchadmin.dll

and confirming with the Enter key.
In the new window above you can see how many elements have been indexed.

Windows 10: Here you add or remove new folders for file indexing.

To determine which file types should be indexed, click on the "Advanced" button and then on the "File types" tab. Place a check mark in front of the file types that are to be indexed. For each type, you can choose whether only the properties or also the file contents should be indexed. Confirm with OK.
To add new folders and locations to the indexing, click on the "Change" button. The "Indexed Locations" window opens. Click on the "Show all locations" button below. At the top you can expand the tab of your hard drive and add more locations by checking the box in front of the respective folder. In the lower part of the window, a summary of all the places that Windows 10 indexes is displayed. Confirm with "OK". Indexing may take a while, depending on the folders you selected. It continues to run in the background.
To stop the indexing, click on the "Stop" button.
Under the "Advanced" button you will find the "Create new" button in the "Index settings" tab. This will rebuild the index in Windows 10 if you have problems with it.
When you have adjusted the indexing options, click on the "Close" button.

Those who do not need indexing of files can deactivate them if necessary.

Klaus

Peace worldwide - it starts here...

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Mike - I think you are right.
On Windows 95, 98 and Me, Notepad could not open files larger than 64 KB.

Wow! What a huge volume! That is 0.064 MB.

If the current Notepad can manage up to 512 MB, then that is better by "only" a factor of 512 / 0.064 = 8000!

Not everything was better before...
Klaus

Peace worldwide - it starts here...

RE: Use a separate file, file type or other method to access nearly 29 million addresses

(OP)
Just an update on my quest:

After looking at all your suggestions and browsing through the 4GB+ file of postcodes, which I managed to split with a software application called "CSV Huge Splitter" (downloaded from SourceForge.net), it appears that there are multiple inconsistencies in the files: some postcodes are missing, and addresses contain inverted commas or embedded commas. Even after trying to remove them and testing different ways to get just a small batch of postcodes to work with, I am unable to do so, and for what it's worth, it isn't worth any more of my time.

Whilst not wishing to be defeatist, I have abandoned this little project as I've already spent too much time on it, but needless to say, I'm grateful for the contributions on this thread.

Best wishes and stay safe guys.

Thank you

Steve Williams
VFP9, SP2, Windows 10

RE: Use a separate file, file type or other method to access nearly 29 million addresses

OP's original premise is off. There is not a 1:1 correlation between a UK postcode and an address.
A UK postcode will generally break down to the "postman's walk", similar to a US ZIP+4.
With the postcode AND a street number, you'll most likely get a hit.

I've done something similar with USPS data. This allows call centres to just ask for a ZIP and street number, and then they will confirm the full address back to the caller.

With that in mind, you'll want to preprocess that data until it is crying and you are too.

UK postal addresses are some of the most confusing on the planet : http://www.columbia.edu/~fdc/postal/#uk

All of the other items that other contributors have brought up will help:
  • Town Names into a separate table, with an FK to that ID rather than the town name. The format you show would seem to indicate that the LAST used 'add' field would have the town name
  • Process each column (across ALL files) to determine lengths that are actually required.
  • Perhaps, create a pipe-delimited format such as numbname|flatnumb|add01|add02|add03|add04|add05, that you unpack when displaying
  • Determine if the overhead of using an FPT (Memo field) for the delimited address information is better. There is an overhead to loading the FPT and the Memo pointer size, but each address is only as long as it needs to be with no waste. Since this data is readonly, you won't encounter Memo bloat.
  • You may want to normalize/compress some other address data, "ROAD"-->"RD", "LANE"-->"LN", but you have to be careful you're not compressing the wrong part of an address. USPS data is very structured in that regard, but add01-add05 tells me that the Royal Mail isn't so structured. How do you uncompress "ST"? Is it Saint or Street? I would leave this for a much later analysis (a rough sketch of the idea follows this list).
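A possible sketch of such compression (the word list, example value and variable name are invented, and it is deliberately applied only to the street part):

CODE --> VFP

* Sketch: compress common street-type words; pad with spaces so only whole words match
lcStreet = "Springfield Close"               && made-up example value
lcStreet = " " + UPPER(ALLTRIM(m.lcStreet)) + " "
lcStreet = STRTRAN(m.lcStreet, " ROAD ", " RD ")
lcStreet = STRTRAN(m.lcStreet, " LANE ", " LN ")
lcStreet = STRTRAN(m.lcStreet, " CLOSE ", " CL ")
lcStreet = ALLTRIM(m.lcStreet)               && "SPRINGFIELD CL"
* note: compressing "STREET" to "ST" is risky, as "ST" can also mean Saint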
Without a method to compress/uncompress the address data, you're going to have to break the dataset into pages, and load those pages in and out ('paging').

In fact, the 29 files they provided are a good starting point for paging the data.
You can keep the data in 29 separate DBFs, index them on the postcode.
For speed, you might even want to consider breaking the size down into smaller files. You'll be opening and closing tables often, so you want a smaller overhead when loading. You'll want to find the sweet spot between NEEDING to 'page' and the IMPACT of 'paging' (bigger pages mean paging less often but slower load/unload times; smaller pages mean paging more often but faster load/unload times). Smaller tends to give a more consistent response.

Something like:
POSTCODE CHAR(12) PK
TOWNID INT (fk to your TOWNS table)
ADDRESS CHAR(??) or MEMO in a packed/delimited format like above

You'll want to check that no postcode straddles two files. If any do, move records between files until each postcode is found in only ONE file.

So now you have
POSTCODE01.DBF --> POSTCODE29.DBF (or whatever)

Then create your main index table (INDEX01.DBF)
POSTCODE (PK, indexed)
ADDRESSTABLE (INT)

Load Distinct postcodes from each POSTCODExx.DBF file into your index
INSERT INTO INDEX01 (postcode,addresstable) SELECT POSTCODE,1 FROM POSTCODE01
...
INSERT INTO INDEX01 (postcode,addresstable) SELECT POSTCODE,29 FROM POSTCODE29

When searching, find the postcode in the INDEX01 table, which will tell you which DBF to open to find the addresses for that postcode
If you are using this interactively, you may want to 'remember' which file you have open, so that if the index points you to the same table again, you can skip the unload/load cycle.

So, a search runs like:
* If no hit in INDEX01 : Invalid Postcode
* If hit, then Load the POSTCODExx table the index tells you to, and SELECT records with matching postcodes into a cursor, where you can do further filtering by address components (house number, etc).
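In VFP, that lookup could be sketched like this (names as in the scheme above):

CODE --> VFP

* Sketch: paged postcode lookup, names as in the scheme above
lcPostcode = PADR("S21 4GS", 12)    && assumed search input
SELECT addresstable FROM INDEX01 WHERE postcode = m.lcPostcode INTO CURSOR curHit
IF _TALLY = 0
	WAIT WINDOW "Invalid Postcode"
ELSE
	lcTable = "POSTCODE" + PADL(curHit.addresstable, 2, "0")
	USE (m.lcTable) IN 0 ALIAS page SHARED
	SELECT * FROM page WHERE postcode = m.lcPostcode INTO CURSOR curAddresses
	USE IN page
ENDIF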

Now, you could also add some analytics (saving of search terms) to see which postcodes are queried most often. That might allow you to move the most queried postcodes into one file to minimize the paging.


RE: Use a separate file, file type or other method to access nearly 29 million addresses

Nice summary, brigmar.

Steve, now I would still be interested in what you downloaded, and from where.

The problem of commas within values is solvable, for example, with Excel's better CSV interpretation.
I already knew the postcode isn't the only determining factor. It'll rather just be enough to know the post office the mail will be routed to, and therefore you'll have postcodes that span different cities or districts, streets so long that one part is nearer to one post office than another (so a street will have two postcodes), etc.

I have looked around a bit and it's hard to find a source for the data. I found a statement that data from the 2017 census should have been worked into public open data, but that's still in progress...

Before I'd go for brigmar's advice: what actually is the intention of getting this data? Do you want to verify addresses? Or what else do you want to do with the data? Because there are APIs with addresses that you could utilize more easily than maintaining your own address data. This isn't just a one-off task; you'll need updates, and the only way I see to get them is to again find full sources and process them, with all the difficulties and the partly manual corrections needed, etc.

Chriss

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Yes, a good summary, Brigmar. I would only disagree with one small part of your post. You said:

Quote:

UK postal addresses are some of the most confusing on the planet

From the outside, they might look confusing at first glance. But they follow a logical structure and are really not that difficult to understand.

You might find my article on validating UK postcodes useful.
Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Steve
In one of the previous posts, atlopes asked this:

Quote (Question by atlope)

If it's not sensitive data, can you post one of your CSV files here, please?
I realize that you can't, because it's sensitive data that doesn't belong to us either.
My request would be this:

Can you perhaps - let's say - provide us with 10 data records in CSV format from your file, some typical (nothing missing) and some not typical (inverted commas, too many commas)?

A bit of anonymization (a different name, a similar format for the postcode, a different house number) would be enough to work with as test material, and you could quickly change a few data records manually, so that no risk regarding data protection or ownership of the data exists.

I would very much like to test how a group of many large text files in connection with the FINDSTR command is in terms of performance.

I would of course also publish the result here.


As you can see, a lot of people are interested in this subject.

It touches on the core question of when it is still worthwhile to convert large amounts of data into a VFP format first and then continue working with it, or whether a simple visual comparison between entered data and a large cloud of data is all that's required.

It also touches on the question of how to avoid main-memory overflow during such operations, and how to avoid too much preliminary work when the available data is poorly structured.

Danke
Klaus

Peace worldwide - it starts here...

RE: Use a separate file, file type or other method to access nearly 29 million addresses

(OP)
Hi Klaus

If anyone would like to download the full file, here is the link:

https://www.gov.uk/government/statistical-data-set...

There is also a monthly update file on that same page. As far as sensitivity is concerned, this isn't an issue, as the data is freely available, but it carries a caveat shown on the Government website:

Quote:

If you use or publish our Price Paid Data, you must add the following attribution statement:

Contains HM Land Registry data © Crown copyright and database right 2021. This data is licensed under the Open Government Licence v3.0.

Good luck to all those interested, and much appreciation to those who posted.

Thank you

Steve Williams
VFP9, SP2, Windows 10

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Steve,

Using the CSVProcessor class at https://github.com/atlopes/csv,

CODE --> VFP

CREATE CURSOR curPricePaid (Identifier Char(38), Price Integer, RefDate Date, PostCode Varchar(8) NULL, ;
	Attrib1 Char(1), Attrib2 Char(1), Attrib3 Char(1), ;
	PAON Varchar(100) NULL, SAON Varchar(100) NULL, ;
	Street Varchar(100) NULL, Locality Varchar(100) NULL, City Varchar(100) NULL, ;
	District Varchar(100), County Varchar(100), ;
	Attrib4 Char(1), Attrib5 Char(1))

LOCAL CSVP AS CSVProcessor

m.CSVP = CREATEOBJECT("CSVProcessor")

m.CSVP.HeaderRow = .F.
m.CSVP.DatePattern = "%4Y-%2M-%2D 00:00"
m.CSVP.Workarea = "curPricePaid"

m.CSVP.Import(GETFILE("csv"))

SELECT curPricePaid
GO TOP
BROWSE 

importing the complete 2020 data (at http://prod.publicdata.landregistry.gov.uk.s3-webs..., 112MB) results in

[screenshot of the imported data in a BROWSE window]
I don't know what the Attrib* columns mean, but hopefully, you'll have some documentation for them.

Putting the large data into some persistent database storage will be another issue. But importing it in the already available chunks seems feasible and straightforward.

Edited info: beware that this takes some time to import, and the 2020 cursor ends up with 780488 rows.

RE: Use a separate file, file type or other method to access nearly 29 million addresses

atlopes, your screenshot is showing land registry data, not the postcode address file which Steve is interested in. Did you post it as an example of how to use the CSVProcessor class? If so, that's fine. It serves that purpose well.

Quote:

I don't know what the Attrib* columns mean

In case anyone is interested, the land registry file shows details of all properties sold in England (and Wales?). Each row represents one such transaction. The attribute codes describe certain aspects of the property. So in the first row in your example, S means semi-detached, N means it is a new building, and F means freehold tenure.

Of course, this is all pretty irrelevant to the subject under discussion, but I thought I would post it anyway, in case anyone is curious.

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Mike,

Quote (Mike Lewis)

your screen shot is showing land registry data, not the postcode address file which Steve is interested in.
I believe that the postcode addresses that Steve is looking for are included in the Price Paid file.

Thank you for the hints regarding the Attrib* columns. From https://www.gov.uk/guidance/about-the-price-paid-d..., now it's possible to build a more meaningful cursor:

CODE --> VFP

CREATE CURSOR curPricePaid (Identifier Char(38), Price Integer, TransferDate Date, PostCode Varchar(8) NULL, ;
	PropertyType Char(1), OldNew Char(1), Duration Char(1), ;
	PAON Varchar(100) NULL, SAON Varchar(100) NULL, ;
	Street Varchar(100) NULL, Locality Varchar(100) NULL, City Varchar(100) NULL, ;
	District Varchar(100), County Varchar(100), ;
	Category Char(1), RecordStatus Char(1))

LOCAL CSVP AS CSVProcessor

m.CSVP = CREATEOBJECT("CSVProcessor")

m.CSVP.HeaderRow = .F.
m.CSVP.DatePattern = "%4Y-%2M-%2D 00:00"
m.CSVP.Workarea = "curPricePaid"

m.CSVP.Import(GETFILE("csv"))

SELECT curPricePaid
GO TOP
BROWSE 

RE: Use a separate file, file type or other method to access nearly 29 million addresses

There are many downloads; it would be good if you pointed out which one to download, Steve.

The complete Price Paid Transaction Data as a CSV file is 3.7 GB, which still doesn't match the 13GB you mentioned. Perhaps that's the unzipped file size?

Since Mike thankfully explained what this data means: are you sure that's what you were looking for? I can imagine most houses/buildings were sold at least once, so you get all addresses, but the topic isn't a full address registry; it's which real estate properties were sold.

Chriss

RE: Use a separate file, file type or other method to access nearly 29 million addresses

I'm pretty sure what Steve is using is the PAF, which is the official Postcode Address File. I've had some experience myself with this file, and it seems to fit the data that Steve has described.

The Land Registry file is related to the PAF in that it contains a postcode field, which can be considered a foreign key into the PAF. But the addresses in the Land Registry file are not necessarily the same as those in the related PAF record. The LR record includes fields such as Locality, City, District and County, none of which are relevant to the PAF.

This seems reasonable when you consider the different functions of these files. The data in the PAF is mainly intended to help deliver mail. The LR file is designed to identify the locations of properties which have been bought and sold. Also the PAF covers Great Britain, Northern Ireland, the Channel Islands and certain overseas territories, whereas the LR file only covers England and Wales.

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads

RE: Use a separate file, file type or other method to access nearly 29 million addresses

As I announced above, I wanted to test how the FINDSTR command can be used on very large text files.

I've done that now.

Since I don't know the original file with the postcodes, I used the "land registry" file for testing.
I renamed it to p1.csv and then duplicated it 38 times with a small VFP program.

CODE -->

*Makemore.prg
*This program multiplies a csv file
CLEAR

FOR i = 1 TO 38
	filnam = ALLTRIM(STR(i))+".csv"
	COPY FILE p1.CSV TO &filnam
ENDFOR
DIR *.* 

That left me with a bunch of 38 *.csv files,
corresponding to about 29.6 million rows or records.

Each *.csv file has a volume of 132 MB, so the following batch had to search through 38 * 132 MB = approx. 5 GB.

The batch is very easy:

CODE --> Lookup.bat

rem This batch lookup.bat uses the command FINDSTR and looks into 38 *.csv-files (one after another)
rem look for this string in File 38, Line 24 
rem which is this one: {B5xxxFFD-E965-7D01-E053-6C04A8C036B6},"116500","2020-09-18 00:00","DE1 3QD","F","N","L","5","","CITY WALK","","DERBY","CITY OF DERBY","CITY OF DERBY","A","A"

rem the total volume = 38 * 780488 rows = 29,658,544 records


echo "Started: " %time% >>resultcsv.txt
findstr "B5xxxFFD-E965-7D01-E053-6C04A8C036B6" *.csv >>resultcsv.txt
echo "Ended: " %time% "End-Time">>resultcsv.txt 

I manually changed just one record in one of the files (38.csv), creating a unique string that had to be searched for.
The string begins with B5xxxFFD in the batch.

I have redirected the result to a text file - and in my opinion the FINDSTR command is very fast.
Here is the result:

CODE -->

"Started: " 11:45:42,49 
38.csv:"{B5xxxFFD-E965-7D01-E053-6C04A8C036B6},""116500"",""2020-09-18 00:00"",""DE1 3QD"",""F"",""N"",""L"",""5"","""",""CITY WALK"","""",""DERBY"",""CITY OF DERBY"",""CITY OF DERBY"",""A"",""A"""
"Ended: " 11:45:48,82 "End-Time" 

As you can see, with one single batch command you can find a given string in about 6 seconds, within a volume of 29 million records.

Of course, it depends on what is needed from the large amount of *.csv material, how often, and how structured it is.
If you only compare strings, then you can leave those big text files in their own format and evaluate them this way -
the result is then simple, and the small result file can be used for further purposes by VFP.

I have not yet tested whether you have to divide the text file into smaller portions - but there will definitely be comments on that here.

I am curious whether VFP, searching in its own divided and indexed *.dbf files, can exceed the search speed of my batch.


Klaus


RE: Use a separate file, file type or other method to access nearly 29 million addresses

Mike, quoting Steve's original code

CODE --> VFP

APPEND FROM "pp-complete_"+LTRIM(STR(a))+".csv" DELIMITED WITH , WITH CHARACTER , 
directs to the Price Paid files ("pp-complete.csv" is the name of the Single File mentioned in the link Steve provided).

Also in the original post:

Quote (Steve Williams)

The purpose of this project is for someone to be able to enter a postcode, search for it and the address will be auto filled elsewhere.

Steve, you may consider using the Here platform's live data for that. One of its resources is the suggestion of addresses given structured or unstructured queries. For instance, the last address in the 2020 list is this line

CODE --> CSV

{9DBAD221-97BA-6EB3-E053-6B04A8C0F257},72500,2020-01-14 00:00,"S21 4GS","F","N","L","5","","SPRINGFIELD CLOSE","ECKINGTON","SHEFFIELD","NORTH EAST DERBYSHIRE","DERBYSHIRE","A","A" 

Using the overHere library as an interface to the Here platform resources, you could fetch something like this:

[screenshot of address suggestions returned by the Here platform]
(the coverage of the Here platform is worldwide, so we can see suggestions from other parts of the globe, but you may restrict a query to a single country, for instance).

If you're interested, the overHere library is at https://github.com/atlopes/overHere.

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Quote (Chris)

Mike, no idea what you talk about

Yes, a lot of people tell me that. Sometimes I've no idea myself.

My point was that the record structure Steve quoted does not look like part of the LR (PPD) file. It is closer to that of the PAF (though not identical), and of course his application might well involve linking the two files in some way.

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads

RE: Use a separate file, file type or other method to access nearly 29 million addresses

I now gave it a shot and downloaded just pp-monthly-update-new-version.csv

The file is 16MB; read into a cursor it becomes 190 MB, with this structure:

CODE

CREATE CURSOR Import(;
guid V(38),;
Price I,;
TransferDate D,;
Postcode V(254) null,;
PropertyType V(1),;
OldNew V(1),;
Duration V(1),;
PAON V(254) null,;
SAON V(254) null,;
Street V(254) null,;
Locality V(254) null,;
City V(254) null,;
District V(254) null,;
County V(254) null,;
Category V(1),;
RecordStatus V(1)) 

atlopes used V(100); I think if you now use Tamar's approach, this can be shortened a bit more. Indeed, MAX(LEN(ALLTRIM(City))) is just 22.

Anyway, now applying my recommended normalization you can get this from the City field:

CODE

SELECT distinct CAST(0 as int) as Id, City FROM myAlias INTO cursor cities readwrite
UPDATE cities SET id = RECNO() 

That's 1123 records from the original 95312. Replacing the City V(254) with an int field saves 95312*250 bytes, that's 22.7 MB; the new column size is just 95312*4 bytes. And the cities table takes about 285KB, but as it's a separate DBF, that's of no interest until you have so many cities that this table would exceed 2GB. Very unlikely, even in the future; there's a finite geographical limit...

With Tamar's method I could reduce the city field to V(22) and save 95312*232 bytes, without an extra table. The difference, 95312*18 bytes, is about 1.6 MB, so the 285KB for an extra cities table are well spent; on the other hand, with the char field you still have all the text data in the one DBF, which might be worth the price.

I'm not comparing this to the solution of a single memo for all address parts, but that would still store all city names in full and as often as they occur.

The potential for savings is smaller for street names, but high again for postcodes. I'm not comparing normalization to the memo solution in detail, as that would take much more work; in any case, you can see how using multiple tables instead of just one addresses DBF helps reduce the sizes.

And btw, I let Excel import the data, including determining the data types from all the data. Then the full range of data can be turned into a VFP array with

CODE

#Define xlLastCell 11

Create Cursor Import(....see atlopes cursor definition)...)

WITH oExcel.ActiveWorkBook.ActiveSheet
  loLastCell = .Cells.SpecialCells( xlLastCell )

  laData = .Range( .Cells(2,1), m.loLastCell ).Value
ENDWITH

Insert into import From Array laData 

This can also be used to split one large Excel sheet from a 13 GB csv into arrays of, say, 100 lines each and insert them into an import cursor, to then forward the data to the normalized table schema portion by portion.

Chriss

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Thanks Mike, that explains it.

Steve, could you point out which data you actually used, or is the source lost in your (perhaps deleted) browser history?
Were add01 to add05 proposed column names, or did you name them this way in your import DBF, or create them while splitting the data?

Chriss

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Chris,
you wrote:

Quote:

Klaus' idea isn't bad, because the search for the parts of interest is done by the shell command findstr, which doesn't have that 2GB limit.
The output it creates will usually be much shorter (if you don't search for something as short as a single letter) and can be read from VFP.
What I think is not good about this solution is the performance.

My question:

Is that still bad performance for you, when (as you can see above) a certain string was found in about 6 seconds, within a volume of 29 million records?

Klaus

Peace worldwide - it starts here...

RE: Use a separate file, file type or other method to access nearly 29 million addresses

If you want to do something like IntelliSense for addresses, 6 seconds is too long; but in comparison to looking through 29 million index cards, it's fast.

Chriss

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Mike:
We talked about text editors and their capacity a long time ago, and again today.
Meanwhile I found this huge one

https://www.ultraedit.com/


Unfortunately it costs money, but I once downloaded a trial version (30 days) for free, and even 10 GB files were no problem for this editor, even when searching.

Just for your information.
Time goes on and on .....

Regards
Klaus

Peace worldwide - it starts here...

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Chris

Thanks for your answer (on the definition of fast or not), which has convinced me.

I am very sorry that Steve hasn't answered your last questions yet.

I would have loved to know what the search speed would be with the multiple VFP files, after the data normalization you had already started.

Klaus

Peace worldwide - it starts here...

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Klaus, thanks for your empathy,

I could just take the PPD data and extract something from it.
But even if this was Steve's data source, or he simply uploaded his data somewhere, I don't really have the time to work on this.

Google shows autocompletion - in their case rather suggestions - is possible with their search engine.

Royal mail has this demo about address completion:
https://addressnow.royalmail.com/demo/

Chriss

RE: Use a separate file, file type or other method to access nearly 29 million addresses

And regarding editors and >2GB or >4GB files:

While notepad.exe chokes on large files, there's an out-of-the-box alternative in Windows: WordPad. It shows the top of txt or doc files while it is still loading. I admit I haven't tried it with a 4GB file, but since we now have that much RAM, any 64-bit editor could be used to search in a large file.

Chriss

RE: Use a separate file, file type or other method to access nearly 29 million addresses

(OP)
Guys
I’m not sure what questions you wanted me to answer, but please be assured I’m not intentionally ignoring you; it’s just that I’ve not had time to dip into this thread.
Give me a couple of days and I’ll respond accordingly.

Thank you

Steve Williams
VFP9, SP2, Windows 10

RE: Use a separate file, file type or other method to access nearly 29 million addresses

(OP)
Chris:

Quote:

Steve, could you really point out which data you used or is the source lost in your (perhaps deleted) browser history?

Web page for the CSV file here: https://www.gov.uk/government/statistical-data-sets/price-paid-data-downloads


Direct download here: http://prod.publicdata.landregistry.gov.uk.s3-website-eu-west-1.amazonaws.com/pp-complete.csv

Quote:

Where add01 to add05 proposed column names or did you name them this way in your import dbf or created them while splitting the data?

Yes. I originally imported one of the split CSV files via VFP9 using Tools > Wizards > Import. The columns came out as column1, column2, etc., so I renamed them accordingly. Prior to this, I deleted some of the columns in the CSV file as they were not needed.

If there is anything else I can answer, then please let me know (there may be a delay replying).

Thank you

Steve Williams
VFP9, SP2, Windows 10

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Well, okay, that makes it hard to know what ended up as add01-add05 in your DBF. If you did multiple things with third-party tools, removed or combined columns, etc., and don't have all the steps reproducible, we (I) can only start with the Price Paid data and see what can be made of that.

I'll give it a try and normalize this as I think will work. I just wonder how you got to 13GB when the download is "just" 4.3GB. Did your "aggregation" of the data introduce a lot of redundancy, or were you just using long char fields, like C(254)?

Anyway, it'll take me a while. Sunday is already planned for other things, and during the week I can't make this a full-time job, so maybe next weekend. I think that's okay, since you've stopped working on it and don't need it urgently, perhaps not at all.

Chriss

RE: Use a separate file, file type or other method to access nearly 29 million addresses

(OP)
Chriss

Thank you for your post. You mentioned:

Quote:

Anyway, it'll take me a while, Sunday is already planned for other things and during the week I can't make that a fulltime job, so maybe next weekend, I think that's okay since you stopped working on it and so don't need it urgent, perhaps not at all.

Perhaps it's best for me to respond with "not at all".

Please don't waste your time on my behalf.

Best wishes

Thank you

Steve Williams
VFP9, SP2, Windows 10

RE: Use a separate file, file type or other method to access nearly 29 million addresses

It's okay. I just want to show how normalization helps to reduce the storage space, and the calculation is simple enough that I can spare myself the full implementation and just show how much you can save.

Just as posted somewhere above, 30 million addresses (= records) means about 70 bytes per record. Even with short but frequently occurring city names like LONDON, you need 6 bytes in a blocksize-0 memo field, 22 bytes in an optimized city char field (as I found out, the longest city name is 22 chars long), but only 4 bytes as a foreign key. I already stated that simple reason.

I showed earlier how the city table is built:

CODE

SELECT distinct CAST(0 as int) as Id, City FROM import INTO cursor cities readwrite
UPDATE cities SET id = RECNO()
copy cities To cities.dbf 

When the integer IDs are given, you can determine the ID of a city by

CODE

SELECT id as cityid From cities where alltrim(city) == alltrim(import.city) 
That's how easily foreign keys are determined, which is maybe the step you thought was too complicated.
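Or, set-based for the whole import in one go (a sketch reusing the cursor names from above):

CODE --> VFP

* Sketch: stamp the cityid onto every import record in one pass
SELECT imp.*, cities.id AS cityid ;
	FROM import imp ;
	INNER JOIN cities ON ALLTRIM(cities.city) == ALLTRIM(imp.city) ;
	INTO CURSOR curNormalized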

I'll nevertheless play with this and a few more ideas. But I'll start my own thread to show my results.






Chriss

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Hello Chris,
I'm curious what your own announced thread will look like - I'm looking forward to it.
I have one more question about this note that you wrote above in this thread:

Quote (Chris Miller)

SELECT distinct CAST(0 as int) as Id, City FROM myAlias INTO cursor cities readwrite
UPDATE cities SET id = RECNO()

What is "SELECT distinct CAST" - is CAST a special field which I could not discover in your structure for your import-cursor/myAlias or is that a command which I did not find in VFP-help?
Sorry if that is a stupid question .
But I don't have any peace of mind if I don't understand something.
Please help...

Klaus

Peace worldwide - it starts here...

RE: Use a separate file, file type or other method to access nearly 29 million addresses

DISTINCT just selects each city name once; it has no direct relation to CAST. DISTINCT is about the whole record, not just the first field or expression following it.

CAST() by itself is a VFP function for converting a type; CAST(0 as int) ensures the field type is Integer and not N(1) or whatever else VFP might otherwise create to store the 0.

Chriss

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Klaus, I can understand your confusion. CAST() is a peculiar function. Instead of having a list of parameters between the parentheses (zero, one or more), it has this "expression AS datatype" syntax, which is completely non-standard.

I wonder why the VFP team didn't give us something like T-SQL's CONVERT() function, which does pretty much the same thing as CAST().

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads

RE: Use a separate file, file type or other method to access nearly 29 million addresses

Mike,

I would also like to have an extended TRANSFORM() that can convert between any two types (as long as it makes sense), like CONVERT() does. But CONVERT() is Microsoft's proprietary SQL dialect, while T-SQL also supports the standard CAST(). CONVERT() does a lot of conversions better, which is quite logical, as MS knows the binary structures of its database types inside and out.

In ANSI SQL, the CAST() syntax is a standard. VFP SQL also supports BETWEEN a AND b, even though it has its BETWEEN() function, and you cannot use BETWEEN a AND b outside of a query.

The more peculiar thing is that you can use CAST(value AS type) outside of an SQL query, which I think no other programming language has. Not that it is of much use. But within a query, it helps define result field types strictly.
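A quick illustration of that:

CODE --> VFP

* CAST() outside a query - legal in VFP, if rarely useful
? CAST(DATETIME() AS Date)      && converts the current DateTime to a Date
? CAST(PI() AS N(10,2))         && 3.14 - numeric with fixed width and precision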

Chriss
