
Score test methods

Status: Not open for further replies.

hkaing79 (Technical User), Jul 26, 2004
I have two tables:
tblStudentScore:
StudentID, TestID, a1_01, ... , a6_20

tblBenchmark:
TestID, ScorePosition, Benchmark

tblStudentScore has a lot of fields that are blank because different TestIDs test different things. So Test 1 can have a1_01 but Test 2 does not.

I took two approaches. In the first approach, I used DLookup to compare the values. In the second approach, I created a crosstab of tblBenchmark and then subtracted the two fields.

The first approach got rid of all the blank fields, but it transposed my table and increased the number of records from 8,000 to 300,000. So it takes forever to perform any average calculation.

The second approach retains the 8,000 records, but there are 50+ fields for which I have to type out a1_01 - a1_01 and so on. Also, in the future, the number of fields can change, so the second approach isn't as flexible.
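For reference, the second approach ends up as a query along these lines (just two of the 50+ subtractions shown; qryBenchmarkXtab is an assumed name for the crosstab of tblBenchmark):

SELECT S.StudentID, S.TestID,
       S.a1_01 - B.a1_01 AS Diff_a1_01,
       S.a1_02 - B.a1_02 AS Diff_a1_02
FROM tblStudentScore AS S INNER JOIN qryBenchmarkXtab AS B
  ON S.TestID = B.TestID;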

Does anyone have another approach that combines the flexibility of approach 1 while retaining the same # of records?
 
a1_01 represents a subtest, i.e. Test = Math, Subtest = Number Sense. The number of subtests varies from test to test, e.g. Grade 1 will have 3 subtests, but Grade 2 will have 4.

Because normalizing the data substantially increases the table, I can't think of any way other than working with the denormalized table.
 
So it takes forever to perform any average calculation.

Perhaps you should show us what your "normalized" design looks like, and some sample data. Did you apply appropriate indexes? What is the average you are calculating? Show us your SQL statement using the "normalized" approach.
Also, in the future, the number of fields can change.
Revisit the "normalized" approach.

Because normalizing the data substantially increases the table
You need to prove to us that table size is the problem. Anyhow, if it were the problem, a better approach(?) would be to normalize, then denormalize using some sort of aggregated level. But I'm speculating because I know little about your data.

Cheers,
dan
 
For one particular Reading program, we have ~8,000 students. They will take 4-5 unit tests. Kindergarten only has 2 unit tests, which are placed in unit 3 and unit 6. Unit 6 for the other grades stores the cumulative score, so I'm only interested in units 1-4 (we tend not to reach our 5th unit test).

The database (original format) is:
tblScores
StudentID | School | Grade | a1_01 | a1_02 | ... | a6_09 | a6_10

I have another table that tells me what a1_01 (a1 is assessment 1, 01 is first standard) is:
tblStandards
Grade | Unit | Score_Position | Name | Total | Benchmark
for example:
1 | 1 | a1_01 | Writing | 8 | 6
For a First Grader, the first standard is called Writing, with 8 total points, of which 6 are needed to pass.

So, in my first approach, I used DLookup to normalize the table with this query:

SELECT tblScores.School, tblScores.Grade, tblStandards.Score_Position, DLookUp([Score_Position],"tblScores","[StudentID] = " & [StudentID]) AS Score, tblStandards.Benchmark
FROM tblScores INNER JOIN tblStandards ON tblScores.Grade = tblStandards.Grade
WHERE DLookUp([Score_Position],"tblScores","[StudentID] = " & [StudentID]) Is Not Null;

This will give me a table that looks like:
School | Grade | Score_Position | Score | Benchmark

So, I went from 60 fields and 8,000 records to 5 fields and 250,000 records.

In the second approach, I created a crosstab of tblStandards, dragged all of its fields into a query, and compared them to tblScores. This is also time-consuming, and it produces a lot of null values because different grades have different standards. The null values make it difficult to create a report.

I actually do normalize and denormalize in approach 1. After normalizing the data, I create a crosstab where Score_Position is the column header, AverageOfScore is the value, and School, Program, and Unit are the row headers. However, this also causes empty columns. :( So I've resorted to copying and pasting in Excel to create the reports.
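The crosstab I build in approach 1 is roughly this (a sketch; qryNormalizedScores is an assumed name for the DLookup query above, and I'm assuming it also carries Program and Unit):

TRANSFORM Avg(Score) AS AvgOfScore
SELECT School, Program, Unit
FROM qryNormalizedScores
GROUP BY School, Program, Unit
PIVOT Score_Position;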

I've basically given up and attribute this problem to the Reading Program. What I wanted to do was compare the results across grades if the grades take the same test, i.e. Grades 1-5 all take Writing and Spelling but Grades 2-5 all take Reading, Vocabulary, and Checking Skills. And to throw another wrench, Grade 1 begins to take Reading in their 4th Unit. I guess that's when they're transitioned to start taking the same tests as Grades 2-5.
 
I used DLookup to normalize the table with this query
No, this isn't normalizing.
I actually do normalize and denormalize in approach 1.
No, you haven't :)
I've basically given up and attribute this problem to the Reading Program.
Actually, I attribute the problem to poor table structure - try normalizing it!

What you want to do is normalize the physical schema, NOT create views (queries) and then pretend the views are your physical schema. You gain nothing by doing this; actually, it would probably cost you in performance.

In other words, physically re-arrange the tables such that they represent a sound (appropriate) relational schema.

If you are getting the data sent to you via each school, then you should arrange the data to suit your needs while (or prior to) importing the data into your RDBMS.

Does this make sense?

Cheers,
Dan
 
Does my representation resemble normalized data?

I actually used a Make Table query to physically rearrange the data, which took well over half an hour. I'm not impatient or anything, but I am trying to make this database as user-friendly as possible. So, for someone who isn't computer-savvy, this half hour may lead the user to think Access has crashed and to shut the program down prematurely. That's why I'm trying to find a simpler process, rather than running various queries.

I have a weak understanding of normalized data. Can my data even be normalized? There's only School, StudentID, Grade, and the various Standards. Isn't the point of normalizing data to reduce redundant fields, i.e. a1_01...a6_10 is bad design, so collapse it? That's why I used DLookup to collapse my data, which also increased the # of records.

I don't want to split the table by school or by grades because I'll be comparing scores across school and across grades.
 
"I actually used Make Table query to physically rearrange the data, which took well over half an hour"

OK, it's becoming a little clearer now. So the problem you are having is getting the data into some sort of structure that is suitable for reporting?

With your first table:
StudentID | School | Grade | a1_01 | a1_02 | ... | a6_09 | a6_10

Why not turn it into something like:
[tt]StudentID | School | Grade | assessment | standard | score
1 A 1 a1 1 5
1 A 1 a1 2 3
...[/tt]

This is not fully normalized, but since you only need the data for reporting purposes (i.e. not for data entry etc.), that should be fine.

I would split "a1_01" into its separate components {assessment, standard} if you want reports by them independently.
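The new table only needs the six columns the code below inserts; something like this would do (a sketch, exact data types assumed):

CREATE TABLE NEW_DATA (
  student_id LONG,
  school     TEXT(50),
  grade      LONG,
  assessment TEXT(2),
  standard   LONG,
  score      LONG
);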

You could populate the new table using a bit of VBA code:
Code:
Public Function PopulationNewtable()
  Dim strSQL   As String
  Dim i        As Long
  Dim j        As Long
  Dim strScore As String

  ' Loop over the a1_01 ... a6_10 fields and append the non-null scores
  For i = 1 To 6
    For j = 1 To 10
      strScore = "a" & i & "_" & Format(j, "00")
      strSQL = "INSERT INTO NEW_DATA ( student_id, school, grade, assessment, standard, score ) " & _
               "  SELECT O.student_id, O.school, O.grade, " & _
               "         'a" & i & "' as assessment, " & j & " as standard, " & _
               "         O." & strScore & " as score " & _
               "  FROM OLD_DATA O " & _
               "  WHERE (((O." & strScore & ") Is Not Null));"
      CurrentProject.Connection.Execute strSQL
    Next j
  Next i
End Function

The above bit of code assumes your old table is named "OLD_DATA" and the new table "NEW_DATA". It simply loops through the "a1_01" to "a6_10" fields and uses an append query to transfer the data from the old table to the new table (ignoring nulls). For me, it took about 6 seconds with the old table having 63 fields and 8,000 records.
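Once the data is in that shape, the benchmark comparison from the original question becomes a single query, something like this sketch (it rebuilds the "a1_01"-style key so it can join back to tblStandards; field names assumed from the earlier posts):

SELECT N.school, N.grade, S.Name, S.Benchmark, Avg(N.score) AS AvgScore
FROM NEW_DATA AS N, tblStandards AS S
WHERE N.grade = S.Grade
  AND S.Score_Position = N.assessment & "_" & Format(N.standard, "00")
GROUP BY N.school, N.grade, S.Name, S.Benchmark;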

cheers,
dan
 
WOW. That's great! There's a big difference between using a Make Table query and VBA. Thanks, dan.
 
you're welcome.

I think much of the slowness you were experiencing was due to the use of DLookup(). The VBA code just saves you from having to run 60 append queries manually :)

cheers,
dan
 
HELP!!! I found this thread, and it is EXACTLY what I am trying to do. Below is the code I am trying to use. I get "Syntax error on INSERT INTO" when I try to use it. My "old" table is tblScores and my new table is tblTmp. What am I doing wrong???

Thanks in advance!!
Steph


Public Function PopulateNewTable()
  Dim strSQL   As String
  Dim i        As Long
  Dim j        As Long
  Dim strScore As String

  For i = 1 To 6
    For j = 0 To 4
      strScore = "No" & i & "YR" & j
      strSQL = "INSERT INTO tblTmp ( StuNo, Score, Year, Goal ) " & _
               "  SELECT O.StudentNo, " & _
               "         O." & strScore & " as Score, " & _
               "         " & j & " as Year, " & i & " as Goal " & _
               "  FROM tblScores O " & _
               "  WHERE ((O." & strScore & ") is not null);"
      CurrentProject.Connection.Execute strSQL
    Next j
  Next i

End Function

 
Thanks, I tried that and still get the same error!

Steph
 
Year is a reserved word in Access SQL, so it needs to be enclosed in square brackets. Try this:

strSQL = "INSERT INTO tblTmp (StuNo, Score, [Year], Goal) " & _
"SELECT O.StudentNo, " & _
"O." & strScore & " as Score, " & _
j & " as Year, " & i & " as Goal " & _
"FROM tblScores O " & _
"WHERE (O." & strScore & " Is Not Null)"

John Borges
 
Thanks....I figured it out....it was the CurrentProject.Connection.Execute strSQL line that was my problem. I changed it to DoCmd.RunSQL and it works like a charm!

Thanks for answering!!

Steph
 
CurrentDb.Execute also works. I don't quite know the difference between the various methods.
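A rough comparison of the three, in case it helps (a sketch; only standard Access calls shown):

Code:
Public Sub RunActionQuery(strSQL As String)
  ' 1. ADO: runs the SQL through the project's ADO connection
  '    (this is what the original code used)
  'CurrentProject.Connection.Execute strSQL

  ' 2. DAO: runs it against the current database; dbFailOnError
  '    raises a trappable error instead of silently dropping rows
  CurrentDb.Execute strSQL, dbFailOnError

  ' 3. Access UI: behaves like running the query from the query window,
  '    so it shows "You are about to append..." prompts unless
  '    warnings are turned off first
  'DoCmd.SetWarnings False
  'DoCmd.RunSQL strSQL
  'DoCmd.SetWarnings True
End Sub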
 