class %UnitTest.TestSqlScript extends %UnitTest.TestScript

Adds methods to test an SQL script.
Summary: To test SQL code, you will:

- Create a namespace and a database, and create a dataset.
- Populate the new database from the dataset.
- Create a test file that contains SQL SELECT statements that test your feature.
- Run the test file to produce a reference file (using the "reference" argument).
- Add methods to your test file that:
  - Clear the database and repopulate it from the dataset.
  - Write results to output.log.
  - Compare output.log to reference.log.
  - Report whether the two files are the same (Passed) or different (Failed).
Each time you run this test, your database begins from a consistent, known state.
Procedure
(Note: This procedure is under construction.)
To set up a namespace, a database, and a dataset:

- Create a namespace and a database called UT.
- Create a class, such as dev.xyz.classname, where dev.xyz matches the Perforce directory structure, such as dev.sql.Query or dev.obj. (Note that, within your test, you will specify this name with underscores, as dev_xyz.classname.)
- Set this class to extend %Populate and %XML.Adapter.
- Open a Caché Terminal, change to the UT namespace, and populate this class with sufficient data using the Populate utility:

  zn "ut"
  do ##class(UT.classname).Populate(100)

- In Studio, select Tools > Export. Select Add, then export UT.classname.cls to an XML file called data.xml in dev.xyz.
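As a sketch, the class created in the steps above might be defined as follows. The class name dev.xyz.classname and its properties are placeholders from this procedure, and extending %Persistent is an assumption (the class must be persistent for its populated data to be stored); %Populate supplies the Populate() method, and %XML.Adapter gives the class an XML projection for export:

```objectscript
Class dev.xyz.classname Extends (%Persistent, %Populate, %XML.Adapter)
{

// Sample properties; %Populate fills these with generated data
Property Name As %String;

Property SSN As %String;

}
```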
To create a test that compares a file called output.log to a file called reference.log:
- Create a new class that extends TestSqlScript, called, for example, TestSqlScriptX, and link it to the SCCS.
- Put it in the dev.xyz directory. [Susan - however this works best]
- Add instance methods to the new class.
First clear the database and repopulate it from the data.xml file:

  Method OnBeforeAllTests
    delete from dev_xyz.classname
    import data.xml
    [find a statement to import the XML]

Now the database is in a consistent, known state.
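One possible way to flesh out the pseudocode above is sketched below. This is an assumption, not the definitive implementation: dev_xyz.classname and dev.xyz.classname are the placeholder names from this procedure, the "classname" element name and file path are guesses, and the %XML.Reader approach assumes data.xml contains objects projected via %XML.Adapter:

```objectscript
Method OnBeforeAllTests() As %Status
{
    // Clear all existing rows from the test table
    // (dev_xyz.classname is the placeholder table name used above)
    &sql(DELETE FROM dev_xyz.classname)

    // Re-import the dataset from data.xml using %XML.Reader;
    // assumes the file contains objects exported via %XML.Adapter
    set reader = ##class(%XML.Reader).%New()
    set status = reader.OpenFile("data.xml")
    if $$$ISERR(status) quit status

    // Correlate the XML element name with the class to import into
    do reader.Correlate("classname", "dev.xyz.classname")
    while reader.Next(.obj, .status) {
        set status = obj.%Save()
        if $$$ISERR(status) quit
    }
    quit status
}
```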
Method Testdevlog-id
[how to automatically take from the devlog what we need or put the URL in the comment]
- Create a file called script.txt in your test directory, containing only the SQL SELECT code you want to test, such as the following. Note that the four pound signs (####) are a statement delimiter; this is needed because it is not unusual for SQL statements to be longer than a single line.
select max(SSN) from Sample.Person
####
select count(ID) from Sample.Person
####
- Run the test.
d ##class(%UnitTest.Manager).RunTest()
Each SQL statement in script.txt is executed and its output is written to output.log until the end of script.txt is reached. Then output.log is compared to reference.log.
The first time that you run the test, you get an error because the reference.log file doesn't yet exist.
- Check that the contents of new output.log file are correct.
- If they are, create a reference.log file that will be used in future runs of this test by copying output.log to reference.log. Now your two files match.
- Run the test again
d ##class(%UnitTest.Manager).RunTest()
This time the test passes.
Every time you run your test, the new output is written to output.log, and output.log is compared to reference.log. If the files match, the test passes. If the files do not match, the test fails. If the output changes, you need to update the reference.log file.
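As a sketch of running such a test from the Terminal: the %UnitTest framework locates test directories under the root named by the ^UnitTestRoot global. The path and the suite name dev_xyz below are assumptions for illustration; substitute your own:

```objectscript
// Point the unit-test framework at the directory containing
// your test directories (this path is an assumption)
set ^UnitTestRoot = "C:\dev\"

// Run all tests in the dev_xyz subdirectory; output.log is
// produced in the test directory and compared to reference.log
do ##class(%UnitTest.Manager).RunTest("dev_xyz")
```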
parameter DISPLAYELAPSEDTIME = " executed and fetched in ";
DISPLAYELAPSEDTIME enables you to internationalize the display of elapsed time,
when TIMESCALE is a positive number.
parameter DISPLAYSECONDS = "seconds";
DISPLAYSECONDS enables you to internationalize the spelling of "seconds" in
the output of elapsed time when TIMESCALE is a positive number.
parameter DISPLAYTESTNAME = "Test ";
DISPLAYTESTNAME enables you to internationalize the display of the test name,
which may be specified after the #### delimiter in the test file, when TIMESCALE
is a positive number.
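For example, a script.txt fragment that names a test after the delimiter might look like the following (the test name MaxSSNTest is a hypothetical name for illustration):

```
select max(SSN) from Sample.Person
#### MaxSSNTest
```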
parameter SHOWPLAN;
Set SHOWPLAN=1 to dump the access plans used by each test in the test script.
Use this feature when you need to verify plan stability from one release to the next, or when you
want to investigate the impact of creating, dropping, or altering an index.
parameter TIMESCALE;
The TIMESCALE parameter enables you to control whether elapsed time information
is displayed for each SQL statement, and the number of significant digits used to display the time.
This can be used to determine whether significant differences in elapsed time are consumed by any
individual query. By default, TIMESCALE="", which prevents elapsed time information from being
displayed at the end of each SQL statement in the test. Setting TIMESCALE to 1 will display the result
in seconds. Setting TIMESCALE to 10 will display elapsed time in multiples of 10 seconds. Setting
TIMESCALE to 0.1 will display elapsed time to the nearest tenth of a second. By grouping together a set of
tests that each take about the same amount of time, you can automate the process of determining whether any
query takes "significantly" more or less time than it did previously.
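A subclass might override these parameters as follows; this is a sketch, and MyTests.TestSqlScriptX is a hypothetical class name:

```objectscript
Class MyTests.TestSqlScriptX Extends %UnitTest.TestSqlScript
{

// Dump the access plan used by each statement in the script
Parameter SHOWPLAN = 1;

// Display elapsed time to the nearest tenth of a second
Parameter TIMESCALE = 0.1;

}
```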
method OnBeforeAllTests() as %Status
Run by RunTest once before any test methods in the test class are run. Can be used to set up a
test environment that will be later cleaned up by OnAfterAllTests.
Example: Setup and Cleanup of an environment:
Method OnBeforeAllTests() As %Status
{
    // do setup stuff here
    set ^inputMessage = "input message"
    quit $$$OK
}

Method OnAfterAllTests() As %Status
{
    // do clean up stuff here
    kill ^inputMessage
    quit $$$OK
}
classmethod runScript(scriptfile As %String, outputfile As %String) as %Status
classmethod runSqlStatement(sql As %String, testName As %String)