test_server_ctrl(3erl)		     Erlang Module Definition		   test_server_ctrl(3erl)

NAME
       test_server_ctrl - This module provides a low-level interface to the Test Server.

DESCRIPTION
       The test_server_ctrl module provides a low-level interface to the Test Server. This
       interface is normally not used directly by the tester, but through a framework built on
       top of test_server_ctrl .

       Common  Test  is  such  a framework, well suited for automated black box testing of target
       systems of any kind (not necessarily implemented in Erlang). Common Test is  also  a  very
       useful  tool  for  white  box testing Erlang programs and OTP applications. Please see the
       Common Test User's Guide and reference manual for more information.

       If you want to write your own framework, some more information can be found in the chapter
       "Writing  your  own  test server framework" in the Test Server User's Guide. Details about
       the interface provided by test_server_ctrl follows below.

EXPORTS
       start() -> Result
       start(ParameterFile) -> Result

	      Types  Result = ok | {error, {already_started, pid()}}
		     ParameterFile = atom() | string()

	      This function starts the test server. If the parameter file is given, it	indicates
	      that  the  target  is  remote. In that case the target node is started and a socket
	      connection is established between the controller and the target node.

	      The parameter file is a text file containing key-value tuples. Each tuple must be
	      followed by a dot-newline sequence. The following key-value tuples are allowed (an
	      example file is shown after the list):

		{type,PlatformType} :
		  This is an atom indicating the target platform type, currently supported: Plat-
		  formType = vxworks
		  Mandatory

		{target,TargetHost} :
		  This is the name of the target host, can be atom or string.
		  Mandatory

		{slavetargets,SlaveTargets} :
		  This is a list of available hosts where slave nodes can be started.  The  host-
		  names are given as atoms or strings.
		  Optional, default SlaveTargets = []

		{longnames,Bool} :
		  This indicates if longnames shall be used, i.e. if the -name option should be
		  used for the target node instead of -sname .
		  Optional, default Bool = false

		{master, {MasterHost, MasterCookie}} :
		  If target is remote and the target node is started as a slave node, this option
		  indicates which master and cookie to use. The given master will also be used as
		  master for slave nodes started with test_server:start_node/3 . It  is  expected
		  that	 the   erl_boot_server	 is   started  on  the	master	node  before  the
		  test_server_ctrl:start/1 function is called.
		  Optional, if not given the test server controller node is used  as  master  and
		  the erl_boot_server is automatically started.
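
	      A minimal example parameter file might look as follows (the host names are
	      hypothetical):

	      {type, vxworks}.
	      {target, my_target_host}.
	      {slavetargets, [slave_host_1, slave_host_2]}.
	      {longnames, false}.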

       stop() -> ok

	      This  stops  the test server (both controller and target) and all its activity. The
	      running test suite (if any) will be halted.

       add_dir(Name, Dir) -> ok
       add_dir(Name, Dir, Pattern) -> ok
       add_dir(Name, [Dir|Dirs]) -> ok
       add_dir(Name, [Dir|Dirs], Pattern) -> ok

	      Types  Name = term()
		     The jobname for this directory.
		     Dir = term()
		     The directory to scan for test suites.
		     Dirs = [term()]
		     List of directories to scan for test suites.
		     Pattern = term()
		     Suite match pattern.  Directories	will  be  scanned  for	Pattern_SUITE.erl
		     files.

	      Puts a collection of suites (modules matching *_SUITE) found in the given
	      directories into the job queue. Name is an arbitrary name for the job; it can be
	      any Erlang term. If Pattern is given, only modules matching Pattern* will be added.
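
	      For example, assuming test suites in a directory "../my_tests" (a hypothetical
	      path):

	      test_server_ctrl:add_dir(my_job, "../my_tests").
	      test_server_ctrl:add_dir(my_job, "../my_tests", "my").

	      The first call adds all *_SUITE modules found in the directory; the second only
	      adds suites matching my*_SUITE.erl.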

       add_module(Mod) -> ok
       add_module(Name, [Mod|Mods]) -> ok

	      Types  Mod = atom()
		     Mods = [atom()]
		     The name(s) of the module(s) to add.
		     Name = term()
		     Name for the job.

	      This function adds a module, or a list of modules, to the test server's job queue.
	      Name may be any Erlang term. When Name is not given, the job gets the name of the
	      module.
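
	      For example (suite names are hypothetical):

	      test_server_ctrl:add_module(my_SUITE).
	      test_server_ctrl:add_module(my_job, [my_SUITE, other_SUITE]).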

       add_case(Mod, Case) -> ok

	      Types  Mod = atom()
		     Name of the module the test case is in.
		     Case = atom()
		     Function name of the test case to add.

	      This  function  will  add one test case to the job queue. The job will be given the
	      module's name.
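
	      For example, assuming a test case my_case in a suite my_SUITE (both names
	      hypothetical):

	      test_server_ctrl:add_case(my_SUITE, my_case).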

       add_case(Name, Mod, Case) -> ok

	      Types  Name = string()
		     Name to use for the test job.

	      Equivalent to add_case/2 , but the test job will get the specified name.

       add_cases(Mod, Cases) -> ok

	      Types  Mod = atom()
		     Name of the module the test case is in.
		     Cases = [Case]
		     Case = atom()
		     Function names of the test cases to add.

	      This function will add one or more test cases to the job queue.  The  job  will  be
	      given the module's name.
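
	      For example (names are hypothetical):

	      test_server_ctrl:add_cases(my_SUITE, [my_case_1, my_case_2]).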

       add_cases(Name, Mod, Cases) -> ok

	      Types  Name = string()
		     Name to use for the test job.

	      Equivalent to add_cases/2 , but the test job will get the specified name.

       add_spec(TestSpecFile) -> ok | {error, nofile}

	      Types  TestSpecFile = string()
		     Name of the test specification file

	      This  function will add the content of the given test specification file to the job
	      queue. The job will be given the name of the test specification file, e.g.  if  the
	      file is called test.spec , the job will be called test .

	      See the reference manual for the test server application for details about the test
	      specification file.
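
	      For example, given a test specification file "test.spec" in the current directory,
	      the following call queues a job named test :

	      test_server_ctrl:add_spec("test.spec").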

       add_dir_with_skip(Name, [Dir|Dirs], Skip) -> ok
       add_dir_with_skip(Name, [Dir|Dirs], Pattern, Skip) -> ok
       add_module_with_skip(Mod, Skip) -> ok
       add_module_with_skip(Name, [Mod|Mods], Skip) -> ok
       add_case_with_skip(Mod, Case, Skip) -> ok
       add_case_with_skip(Name, Mod, Case, Skip) -> ok
       add_cases_with_skip(Mod, Cases, Skip) -> ok
       add_cases_with_skip(Name, Mod, Cases, Skip) -> ok

	      Types  Skip = [SkipItem]
		     List of items to be skipped from the test.
		     SkipItem = {Mod,Comment} | {Mod,Case,Comment} | {Mod,Cases,Comment}
		     Mod = atom()
		     Test suite name.
		     Comment = string()
		     Reason why suite or case is being skipped.
		     Cases = [Case]
		     Case = atom()
		     Name of test case function.

	      These functions add test jobs just  like	the  add_dir,  add_module,  add_case  and
	      add_cases  functions  above, but carry an additional argument, Skip. Skip is a list
	      of items that should be skipped in the current test run. Test job items that  occur
	      in the Skip list will be logged as SKIPPED with the associated Comment.
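
	      For example, a directory job where one whole suite and one single test case are
	      skipped (all names are hypothetical):

	      Skip = [{broken_SUITE, "Suite not yet fixed"},
		     {my_SUITE, unstable_case, "Fails at random"}],
	      test_server_ctrl:add_dir_with_skip(my_job, ["../my_tests"], Skip).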

       add_tests_with_skip(Name, Tests, Skip) -> ok

	      Types  Name = term()
		     The jobname for this directory.
		     Tests = [TestItem]
		     List of jobs to add to the run queue.
		     TestItem = {Dir,all,all} | {Dir,Mods,all} | {Dir,Mod,Cases}
		     Dir = term()
		     The directory to scan for test suites.
		     Mods = [Mod]
		     Mod = atom()
		     Test suite name.
		     Cases = [Case]
		     Case = atom()
		     Name of test case function.
		     Skip = [SkipItem]
		     List of items to be skipped from the test.
		     SkipItem = {Mod,Comment} | {Mod,Case,Comment} | {Mod,Cases,Comment}
		     Comment = string()
		     Reason why suite or case is being skipped.

	      This function adds various test jobs to the test_server_ctrl job queue. These jobs
	      can be of different types (all or specific suites in one directory, all or specific
	      cases in one suite, etc.). It is also possible to get particular items skipped by
	      passing them along in the Skip list (see the add_*_with_skip functions above).
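
	      For example (names are hypothetical), adding all suites in one directory and two
	      specific cases from a suite in another:

	      Tests = [{"../my_tests", all, all},
		      {"../other_tests", other_SUITE, [my_case_1, my_case_2]}],
	      test_server_ctrl:add_tests_with_skip(my_job, Tests, []).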

       abort_current_testcase(Reason) -> ok | {error,no_testcase_running}

	      Types  Reason = term()
		     The reason for stopping the test case, which will be printed in the log.

	      When calling this function, the currently executing test case will be  aborted.  It
	      is  the user's responsibility to know for sure which test case is currently execut-
	      ing. The function is therefore only safe to call from a  function  which	has  been
	      called (or synchronously invoked) by the test case.
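
	      For example, aborting a test case that is known to have deadlocked:

	      test_server_ctrl:abort_current_testcase("Test case deadlocked").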

       set_levels(Console, Major, Minor) -> ok

	      Types  Console = integer()
		     Level for I/O to be sent to console.
		     Major = integer()
		     Level for I/O to be sent to the major logfile.
		     Minor = integer()
		     Level for I/O to be sent to the minor logfile.

	      Determines where I/O from test suites and the test server will go. All text output
	      from test suites and the test server is tagged with a priority value ranging from 0
	      to 100, where 100 is the most detailed (see the section about log files in the
	      user's guide). Output from the test cases (using io:format/2 ) has a detail level
	      of 50. Depending on the levels set by this function, this I/O may be sent to the
	      console, the major log file (common to the whole test suite) or the minor log file
	      (separate for each test case).

	      All output with detail level:

		* Less than or equal to Console is displayed on the screen (default 1)

		* Less than or equal to Major is logged in the major log file (default 19)

		* Greater than or equal to Minor is logged in the minor log files (default 10)

	      To view the currently set thresholds, use the get_levels/0 function.
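
	      For example, the following call restores the default thresholds mentioned above:

	      test_server_ctrl:set_levels(1, 19, 10).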

       get_levels() -> {Console, Major, Minor}

	      Returns the current levels. See set_levels/3 for types.

       jobs() -> JobQueue

	      Types  JobQueue = [{list(), pid()}]

	      This function will return all the jobs currently in the job queue.

       multiply_timetraps(N) -> ok

	      Types  N = integer() | infinity

	      This  function  should  be  called before a test is started which requires extended
	      timetraps, e.g. if extensive tracing is used. All timetraps started after this call
	      will be multiplied by N .
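
	      For example, to make all subsequently started timetraps ten times longer:

	      test_server_ctrl:multiply_timetraps(10).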

       scale_timetraps(Bool) -> ok

	      Types  Bool = true | false

	      This function should be called before a test is started. The parameter specifies if
	      test_server should attempt to automatically scale the timetrap value  in	order  to
	      compensate for delays caused by e.g. the cover tool.

       get_timetrap_parameters() -> {N,Bool}

	      Types  N = integer() | infinity
		     Bool = true | false

	      This  function  may  be  called  to read the values set by multiply_timetraps/1 and
	      scale_timetraps/1 .

       cover(Application,Analyse) -> ok
       cover(CoverFile,Analyse) -> ok
       cover(App,CoverFile,Analyse) -> ok

	      Types  Application = atom()
		     OTP application to cover compile
		     CoverFile = string()
		     Name of file listing modules to exclude from or include  in  cover  compila-
		     tion. The filename must include full path to the file.
		     Analyse = details | overview

	      This function informs the test_server controller that the next test shall run with
	      code coverage analysis. All timetraps will automatically be multiplied by 10 when
	      cover is run.

	      Application and CoverFile indicate what to cover compile. If Application is given,
	      the default is that all modules in the ebin directory of the application will be
	      cover compiled. The ebin directory is found by adding ebin to code:lib_dir(Appli-
	      cation) .
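
	      For example, to cover compile all modules of a (hypothetical) application my_app
	      and have each module analysed in detail:

	      test_server_ctrl:cover(my_app, details).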

	      A CoverFile can have the following entries:

	      {exclude, all | ExcludeModuleList}.
	      {include, IncludeModuleList}.

	      Note that each line must end with a full stop.  ExcludeModuleList  and  IncludeMod-
	      uleList are lists of atoms, where each atom is a module name.
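
	      For example, a cover file excluding two modules and including one extra module
	      (all module names are hypothetical):

	      {exclude, [my_mod_1, my_mod_2]}.
	      {include, [extra_mod]}.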

	      If both an Application and a CoverFile are given, all modules in the application
	      are cover compiled, except for the modules listed in ExcludeModuleList . The
	      modules in IncludeModuleList are also cover compiled.

	      If a CoverFile is given, but no Application , only the modules in IncludeModuleList
	      are cover compiled.

	      Analyse indicates the detail level of the cover analysis. If Analyse  =  details	,
	      each  cover compiled module will be analysed with cover:analyse_to_file/1 . If Ana-
	      lyse = overview an overview of all cover compiled modules is created,  listing  the
	      number of covered and not covered lines for each module.

	      If   the	 test	following   this  call	starts	any  slave  or	peer  nodes  with
	      test_server:start_node/3 , the same cover compiled  code	will  be  loaded  on  all
	      nodes.  If the loading fails, e.g. if the node runs an old version of OTP, the node
	      will simply not be a part of the coverage analysis. Note that slave or  peer  nodes
	      must  be stopped with test_server:stop_node/1 for the node to be part of the cover-
	      age analysis, else the test server will not be able to fetch coverage data from the
	      node.

	      When  the  test is finished, the coverage analysis is automatically completed, logs
	      are created and the cover compiled modules are unloaded. If another test is  to  be
	      run with coverage analysis, test_server_ctrl:cover/2/3 must be called again.

       cross_cover_analyse(Level) -> ok

	      Types  Level = details | overview

	      Analyse  cover  data  collected  from  all tests. The modules analysed are the ones
	      listed in the cross cover file cross.cover in the current  directory  of	the  test
	      server.

	      The modules listed in the cross.cover file are modules that are heavily used by
	      applications other than the one they belong to. This function should be run after
	      all tests are completed, and the result will be stored in a file called
	      cross_cover.html in the run.<timestamp> directory of the application the modules
	      belong to.

	      The cross.cover file contains elements like this:

	      {App,Modules}.

	      where  App  can  be  an  application name or the atom all . The application (or all
	      applications) will cover compile the listed Modules .
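
	      For example, a cross.cover file with one entry for a specific application and one
	      entry covering all applications (the module choices are hypothetical):

	      {kernel, [file, disk_log]}.
	      {all, [lists]}.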

       trc(TraceInfoFile) -> ok | {error, Reason}

	      Types  TraceInfoFile = atom() | string()
		     Name of a file defining which functions to trace and how

	      This function starts call trace on target and on	slave  or  peer  nodes	that  are
	      started or will be started by the test suites.

	      Timetraps  are  not extended automatically when tracing is used. Use multiply_time-
	      traps/1 if necessary.

	      Note that the trace support in the test server is in a  very  early  stage  of  the
	      implementation, and thus not yet as powerful as one might wish for.

	      The trace information file specified by the TraceInfoFile argument is a text file
	      containing one or more of the following elements (an example file is shown after
	      the list):

		* {SetTP,Module,Pattern}.

		* {SetTP,Module,Function,Pattern}.

		* {SetTP,Module,Function,Arity,Pattern}.

		* ClearTP.

		* {ClearTP,Module}.

		* {ClearTP,Module,Function}.

		* {ClearTP,Module,Function,Arity}.

		SetTP = tp | tpl :
		  This maps to the corresponding functions in the ttb module in the observer
		  application. tp means set trace pattern on global function calls. tpl means set
		  trace pattern on local and global function calls.

		ClearTP = ctp | ctpl | ctpg :
		  This maps to the corresponding functions in the ttb module in the observer
		  application. ctp means clear trace pattern (i.e. turn off) on global and local
		  function calls. ctpl means clear trace pattern on local function calls only and
		  ctpg means clear trace pattern on global function calls only.

		Module = atom() :
		  The module to trace

		Function = atom() :
		  The name of the function to trace

		Arity = integer() :
		  The arity of the function to trace

		Pattern = [] | match_spec() :
		  The  trace  pattern to set for the module or function. For a description of the
		  match_spec() syntax, please turn to the User's guide	for  the  runtime  system
		  (erts).  The chapter "Match Specification in Erlang" explains the general match
		  specification language.
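
	      A small example trace information file (module and function names are
	      hypothetical):

	      {tp, my_mod, []}.
	      {tpl, my_mod, my_func, 2, []}.
	      {ctp, other_mod}.

	      This sets a trace pattern on all global calls to my_mod, sets one on local and
	      global calls to my_mod:my_func/2, and clears trace patterns on other_mod.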

	      The trace result will be logged in a (binary) file called  NodeName-test_server  in
	      the current directory of the test server controller node. The log must be formatted
	      using ttb:format/1/2 .

       stop_trace() -> ok | {error, not_tracing}

	      This function stops tracing on target, and on slave or peer  nodes  that	are  cur-
	      rently running. New slave or peer nodes will no longer be traced after this.

FUNCTIONS INVOKED FROM COMMAND LINE
       The following functions are supposed to be invoked from the command line, using the -s
       option when starting the Erlang node.

EXPORTS
       run_test(CommandLine) -> ok

	      Types  CommandLine = FlagList

	      This function is supposed to be invoked from the command line. It starts the test
	      server, interprets the arguments supplied on the command line, runs the tests
	      specified and, when all tests are done, stops the test server and returns to the
	      Erlang prompt.

	      The  CommandLine	argument  is  a  list  of  command line flags, typically ['KEY1',
	      Value1, 'KEY2', Value2, ...] . The valid command line flags are listed below.

	      Under a UNIX command prompt, this function can be invoked like this:
	      erl -noshell -s test_server_ctrl run_test KEY1 Value1 KEY2  Value2  ...  -s  erlang
	      halt

	      Or make an alias (this is for unix/tcsh)
	      alias erl_test 'erl -noshell -s test_server_ctrl run_test \!* -s erlang halt'

	      And then use it like this
	      erl_test KEY1 Value1 KEY2 Value2 ...

	      The valid command line flags are listed below (a combined example is shown after
	      the list):

		DIR dir :
		  Adds all test modules in the directory dir to the job queue.

		MODULE mod :
		  Adds the module mod to the job queue.

		CASE mod case :
		  Adds the case case in module mod to the job queue.

		SPEC spec :
		  Runs the test specification file spec .

		SKIPMOD mod :
		  Skips all test cases in the module mod .

		SKIPCASE mod case :
		  Skips the test case case in module mod .

		NAME name :
		  Gives the test suite a name other than the default. This does not apply to
		  SPEC, which keeps its own names.

		PARAMETERS parameterfile :
		  Specifies the parameter file to use when starting a remote target.

		COVER app cover_file analyse :
		  Indicates that the test should be run with cover analysis. app , cover_file
		  and analyse correspond to the parameters to test_server_ctrl:cover/3 . If no
		  cover file is used, the atom none should be given.

		TRACE traceinfofile :
		  Specifies a trace information file. When this option is given, call tracing  is
		  started  on  the  target node and all slave or peer nodes that are started. The
		  trace information file specifies which modules and functions to trace. See  the
		  function trc/1 above for more information about the syntax of this file.
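
	      A combined example, adding one test directory and one extra suite to the job queue
	      while skipping a single test case (all names are hypothetical):

	      erl -noshell -s test_server_ctrl run_test \
		  DIR ../my_tests MODULE extra_SUITE \
		  SKIPCASE my_SUITE broken_case -s erlang halt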

FRAMEWORK CALLBACK FUNCTIONS
       A test server framework can be defined by setting the environment variable
       TEST_SERVER_FRAMEWORK to a module name. This module will then be the framework callback
       module, and it must export the following functions:

EXPORTS
       get_suite(Mod,Func) -> TestCaseList

	      Types  Mod = atom()
		     Test suite name.
		     Func = atom()
		     Name of test case.
		     TestCaseList = [SubCase]
		     List of test cases.
		     SubCase = atom()
		     Name of a case.

	      This function is called before a test case is started. The purpose is to retrieve a
	      list of subcases. The  default  behaviour  of  this  function  should  be  to  call
	      Mod:Func(suite) and return the result from this call.
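
	      A minimal sketch of the default behaviour described above:

	      get_suite(Mod, Func) ->
		  Mod:Func(suite).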

       init_tc(Mod,Func,Args0)	->  {ok,Args1} | {skip,ReasonToSkip} | {auto_skip,ReasonToSkip} |
       {fail,ReasonToFail}

	      Types  Mod = atom()
		     Test suite name.
		     Func = atom()
		     Name of test case or configuration function.
		     Args0 = Args1 = [tuple()]
		     Normally Args = [Config]
		     ReasonToSkip = term()
		     Reason to skip the test case or configuration function.
		     ReasonToFail = term()
		     Reason to fail the test case or configuration function.

	      This function is called before a test case or configuration function starts. It  is
	      called  on  the process executing the function Mod:Func . Typical use of this func-
	      tion can be to alter the input parameters to the test case function ( Args ) or  to
	      set properties for the executing process.

	      By  returning  {skip,Reason}  ,  Func  gets  skipped.  Func  also  gets  skipped if
	      {auto_skip,Reason} is returned, but then gets an auto skipped status  (rather  than
	      user skipped).

	      To fail Func immediately instead of executing it, return {fail,ReasonToFail}.
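
	      A minimal sketch of an init_tc/3 implementation, passing the arguments through
	      unchanged but skipping a (hypothetical) heavy test case when a (hypothetical)
	      QUICK_RUN environment variable is set:

	      init_tc(_Mod, heavy_case, Args) ->
		  case os:getenv("QUICK_RUN") of
		      false -> {ok, Args};
		      _Set  -> {skip, "Skipped in quick runs"}
		  end;
	      init_tc(_Mod, _Func, Args) ->
		  {ok, Args}.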

       end_tc(Mod,Func,Status) -> ok | {fail,ReasonToFail}

	      Types  Mod = atom()
		     Test suite name.
		     Func = atom()
		     Name of test case or configuration function.
		     Status = {Result,Args} | {TCPid,Result,Args}
		     The status of the test case or configuration function.
		     ReasonToFail = term()
		     Reason to fail the test case or configuration function.
		     Result = ok | Skip | Fail
		     The final result of the test case or configuration function.
		     TCPid = pid()
		     Pid of the process executing Func
		     Skip = {skip,SkipReason}
		     SkipReason = term() | {failed,{Mod,init_per_testcase,term()}}
		     Reason why the function was skipped.
		     Fail  =  {error,term()}  |  {'EXIT',term()} | {timetrap_timeout,integer()} |
		     {testcase_aborted,term()} | testcase_aborted_or_killed |  {failed,term()}	|
		     {failed,{Mod,end_per_testcase,term()}}
		     Reason why the function failed.
		     Args = [tuple()]
		     Normally Args = [Config]

	      This function is called when a test case, or a configuration function, is finished.
	      It is normally called on the process where the function Mod:Func has  been  execut-
	      ing,  but  if not, the pid of the test case process is passed with the Status argu-
	      ment.

	      Typical use of the end_tc/3 function can be to clean up after init_tc/3 .

	      If Func is a test case, it is possible to analyse the value  of  Result  to  verify
	      that init_per_testcase/2 and end_per_testcase/2 executed successfully.

	      It  is possible with end_tc/3 to fail an otherwise successful test case, by return-
	      ing {fail,ReasonToFail} . The test case Func will be logged as failed with the pro-
	      vided term as reason.
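
	      A sketch of an end_tc/3 implementation that fails an otherwise successful test
	      case if it left a (hypothetical) registered process running:

	      end_tc(_Mod, _Func, {ok, _Args}) ->
		  case whereis(my_leaked_server) of
		      undefined -> ok;
		      _Pid      -> {fail, "my_leaked_server left running"}
		  end;
	      end_tc(_Mod, _Func, _Status) ->
		  ok.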

       report(What,Data) -> ok

	      Types  What = atom()
		     Data = term()

	      This function is called in order to keep the framework up to date with the progress
	      of the test. This is useful e.g. if the framework implements a GUI where the
	      progress information is constantly updated. The following can be reported (a
	      minimal callback sketch is shown after the list):

	      What = tests_start, Data = {Name,NumCases}
	      What = tests_done, Data = {Ok,Failed,{UserSkipped,AutoSkipped}}
	      What = tc_start, Data = {Mod,Func}
	      What = tc_done, Data = {Mod,Func,Result}
	      What = tc_user_skip, Data = {Mod,Func,Comment}
	      What = tc_auto_skip, Data = {Mod,Func,Comment}
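
	      A minimal callback sketch that simply prints each progress report (a real
	      framework would typically update its own state or GUI instead):

	      report(What, Data) ->
		  io:format("~w: ~p~n", [What, Data]),
		  ok.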

       error_notification(Mod, Func, Args, Error) -> ok

	      Types  Mod = atom()
		     Test suite name.
		     Func = atom()
		     Name of test case or configuration function.
		     Args = [tuple()]
		     Normally Args = [Config]
		     Error = {Reason,Location}
		     Reason = term()
		     Reason for termination.
		     Location = unknown | [{Mod,Func,Line}]
		     Last known position in Mod before termination.
		     Line = integer()
		     Line number in file Mod.erl.

	      This  function  is called as the result of function Mod:Func failing with Reason at
	      Location. The function is intended mainly to aid specific logging or error handling
	      in  the framework application. Note that for Location to have relevant values (i.e.
	      other than unknown), the line macro or test_server_line  parse  transform  must  be
	      used.  For  details,  please  see  the section about test suite line numbers in the
	      test_server reference manual page.

       warn(What) -> boolean()

	      Types  What = processes | nodes

	      The test server checks the number of processes and nodes before and after the test
	      is executed. This function asks the framework whether the test server should warn
	      when the number of processes or nodes has changed during the test execution. If
	      true is returned, a warning will be written in the test case minor log file.
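
	      For example, a framework that wants warnings about leaked processes but not about
	      nodes:

	      warn(processes) -> true;
	      warn(nodes) -> false.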

       target_info() -> InfoStr

	      Types  InfoStr = string() | ""

	      The test server will ask the framework for information about the test target system
	      and print InfoStr in the test case log file below the host information.

Ericsson AB				test_server 3.4.3		   test_server_ctrl(3erl)

