
iSQI CTAL-TAE - ISTQB Certified Tester Advanced Level, Test Automation Engineering


You have agreed with your organization's managers to conduct a pilot project to introduce test automation. The managers' expectations about the benefits of automation are overly optimistic. Which of the following is LEAST relevant when deciding the scope of the pilot project's objectives?

A.

Evaluate the suitability of different test automation tools based on the technology stack used by the applications for which the automated tests will be developed

B.

Evaluate the potential cost savings and benefits (e.g., faster test execution, better test coverage) of using automated testing versus manual testing

C.

Evaluate the knowledge and skills of people who will be involved in automating test cases for applicable test automation frameworks and technologies

D.

Evaluate the performance of an organization's network infrastructure in terms of factors such as availability, bandwidth, latency, packet loss, and jitter

Which one of the following is NOT an example of a configuration item that should be specified in development pipelines to identify a test environment (and its specific test data), associated with a web app under test, on which to execute automated tests?

A.

The number and type of automated tests to execute in the test environment where the web app is deployed

B.

The base URL of the test environment where the web app is deployed (i.e., the root address for accessing the web app)

C.

The connection string(s) to connect to the test database(s) within the test environment where the web app is deployed

D.

The URLs of web APIs/web services related to the web app’s backend within the test environment where the app is deployed
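For context, configuration items of this kind (base URLs, database connection strings, backend API endpoints) are typically externalized per test environment and selected by a pipeline variable. A minimal Python sketch of that idea; the environment name, URLs, connection string, and variable name TEST_ENV are all hypothetical, not part of the question:

```python
# Hypothetical per-environment configuration for a web app under test.
# Environment names, URLs, and connection strings are illustrative only.
import os

TEST_ENVIRONMENTS = {
    "staging": {
        "base_url": "https://staging.example.com",  # root address of the web app
        "db_connection": "postgresql://qa:****@staging-db.example.com/shopdb",
        "api_urls": {
            "cart": "https://staging.example.com/api/v1/cart",
            "catalog": "https://staging.example.com/api/v1/catalog",
        },
    },
}

def get_environment_config(name: str | None = None) -> dict:
    """Resolve the target test environment from a pipeline variable."""
    env = name or os.environ.get("TEST_ENV", "staging")
    return TEST_ENVIRONMENTS[env]
```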

An automated test script makes a well-formed request to a REST API in the backend of a web app to add a single item for a product (with ID = 710) to the cart and expects a response confirming that the product was successfully added. The status line of the API response is "HTTP/1.1 200 OK", while the response body indicates that the product is out of stock. The API response is correct; the test script fails but completes, and the message to log is: "The product with ID = 710 is out of stock. Cart not updated." When this occurs, you are already aware that both the failed test and the API are behaving correctly and that the problem is in the test data. The TAS supports the following test logging levels: FATAL, ERROR, WARN, INFO, DEBUG. Which of the following is the MOST appropriate test logging level to use to log the specified message?

A.

FATAL

B.

INFO

C.

DEBUG

D.

WARN
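As a refresher, the conventional meaning of these levels maps directly onto Python's standard logging module. A minimal sketch of the scenario's test script; the endpoint path, request payload, and response field names are assumptions for illustration, not details given in the question:

```python
# Illustrative use of standard logging levels in an automated API test.
# The /api/v1/cart endpoint and the JSON field names are assumptions.
import logging
import requests

logging.basicConfig(level=logging.DEBUG,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("cart_tests")

def add_single_item_to_cart(base_url: str, product_id: int = 710) -> None:
    """Automated check: add one item to the cart and verify the outcome."""
    log.debug("Sending POST /cart for product %d", product_id)  # diagnostic detail
    response = requests.post(f"{base_url}/api/v1/cart",
                             json={"productId": product_id, "quantity": 1})
    log.info("Status line: HTTP %d", response.status_code)  # normal progress
    body = response.json()
    if body.get("status") == "out_of_stock":
        # A known test-data problem, not a defect in the SUT or in the test
        log.warning("The product with ID = %d is out of stock. Cart not updated.",
                    product_id)
    assert body.get("status") == "added", "product was not added to the cart"
```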

Consider a TAS that exclusively uses the APIs of a SUT. To make this work, significant changes to the SUT have been required: a set of dedicated test interfaces has been added to the APIs. All the automated tests will use these test interfaces when interacting with the SUT. Assume that you are currently verifying the correctness of the automated test environment and test tool setup.

Which of the following would you expect to be the MOST specific risk associated with this scenario?

A.

The connectivity from the TAS to the dedicated test interfaces will not work

B.

The process of configuring the TAS will be error-prone due to manual intervention

C.

The automated test cases will not contain the expected result

D.

False alarms that are unlikely to occur in the real world will be observed during testing

Which of the following statements BEST describes aspects of the SUT to consider when designing a TAA?

A.

All the interactions between the SUT and the TAS should be logged at the highest level of detail

B.

All the internal test interfaces of the SUT should be removed prior to the product release

C.

All the interfaces of the SUT affected by the tests should be controllable by the TAA

D.

All the external test interfaces of the SUT should be removed prior to the product release

A suite of automated test cases was run multiple times on the same release of the SUT in the same test environment. Consider analyzing a test histogram that shows the distribution of test results (pass, fail, etc.) for each test case across these runs. Which of the following potential issues is MOST likely to be identified as a result of such an analysis?

A.

Outliers in test execution times

B.

Security vulnerabilities in automated test cases

C.

Unstable automated test cases

D.

Maintainability issues in automated test cases
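To make the analysis concrete, here is a minimal sketch of building such a histogram from repeated runs; the test IDs and outcomes are illustrative placeholder data. A test that shows mixed outcomes on the same SUT release in the same environment is the classic signature of instability:

```python
# Build a per-test outcome histogram across repeated runs.
from collections import Counter

# Placeholder results: one dict per run, mapping test case ID -> outcome.
runs = [
    {"TC-01": "pass", "TC-02": "pass", "TC-03": "fail"},
    {"TC-01": "pass", "TC-02": "fail", "TC-03": "fail"},
    {"TC-01": "pass", "TC-02": "pass", "TC-03": "fail"},
]

def result_histogram(runs: list[dict]) -> dict[str, Counter]:
    """Count outcomes per test case across all runs."""
    histogram: dict[str, Counter] = {}
    for run in runs:
        for test_id, outcome in run.items():
            histogram.setdefault(test_id, Counter())[outcome] += 1
    return histogram

for test_id, counts in sorted(result_histogram(runs).items()):
    unstable = len(counts) > 1  # mixed outcomes on an identical SUT/environment
    print(test_id, dict(counts), "<- unstable?" if unstable else "")
```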

Which of the following aspects of “design for testability” is MOST directly associated with the need to define precisely which interfaces are available in the SUT for test automation at different test levels?

A.

Autonomy

B.

Architecture transparency

C.

Controllability

D.

Observability

You are executing the first test run of a test automation suite of 200 tests. All the relevant information related to the state of the SUT and to the automated test execution is stored in a small database. During the automated test run you observe that the first 10 tests pass, while an abnormal termination occurs when executing the 11th test. This test does not complete its execution and the overall execution of the suite is aborted. An immediate analysis of the abnormal termination is expected to be time-consuming, and you have been asked to produce a detailed report of the execution results for the first test run as soon as possible.

What is the MOST important FIRST step to be taken immediately after the abnormal termination occurred when executing the 11th test?

A.

Re-run the test automation suite starting from the 12th test

B.

Return the database to a consistent state that allows subsequent tests to run

C.

Take a backup of the database in its current state, so it can be analyzed later

D.

Re-run the test automation suite starting from the 1st test
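As an illustration of what a state snapshot like the one described in option C might look like, here is a minimal sketch. The choice of SQLite and the file path are assumptions; the scenario only says the TAS uses "a small database":

```python
# Snapshot the TAS state database exactly as the aborted run left it,
# so the abnormal termination can be analyzed later.
import sqlite3
from datetime import datetime

DB_PATH = "tas_state.db"  # hypothetical path to the small state database

def backup_database(db_path: str = DB_PATH) -> str:
    """Copy the database to a timestamped backup file and return its path."""
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    backup_path = f"{db_path}.{stamp}.bak"
    src = sqlite3.connect(db_path)
    dst = sqlite3.connect(backup_path)
    with dst:
        # sqlite3's online backup API copies a consistent snapshot even
        # if the source connection is still in use elsewhere.
        src.backup(dst)
    src.close()
    dst.close()
    return backup_path
```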

Designing the System Under Test (SUT) for testability is important for a good test automation approach and can also benefit manual test execution.

Which of the following is NOT a consideration when designing for testability?

A.

Observability: The SUT needs to provide interfaces that give insight into the system.

B.

Reusability: The code written for the SUT must be reusable for other similar systems.

C.

Clearly defined architecture: The SUT architecture needs to provide clear and understandable interfaces giving control and visibility at all test levels.

D.

Control: The SUT needs to provide interfaces that can be used to perform actions on the SUT.
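To illustrate the observability and control considerations in code, here is a minimal sketch of a hypothetical SUT component that exposes both kinds of interface; the class and method names are invented for the example:

```python
class ShoppingCartService:
    """Hypothetical SUT component with testability hooks built in."""

    def __init__(self) -> None:
        self._items: dict[int, int] = {}

    # Control: an interface the test automation can use to perform
    # actions on the SUT directly, without going through the UI.
    def add_item(self, product_id: int, quantity: int = 1) -> None:
        self._items[product_id] = self._items.get(product_id, 0) + quantity

    # Observability: an interface that gives insight into internal state,
    # so checks do not have to be inferred from UI output alone.
    def snapshot(self) -> dict[int, int]:
        return dict(self._items)
```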

A TAS is used to run, in a test environment, a suite of automated regression tests written at the UI level against different releases of a web app. All executions complete successfully and always provide correct results (i.e., producing neither false positives nor false negatives). The tests, all independent of each other, consist of executable test scripts based on the flow model pattern, which has been implemented in a three-layer TAF (test scripts, business logic, core libraries) by extending the page object model via the façade pattern. Currently the suite takes too long to run, and the test scripts are considered too long in terms of LOC (lines of code). Which of the following recommendations would you provide for improving the TAS (assuming it is possible to perform all of them)?

A.

Modify the TAF so that test scripts are based on the page object model, rather than the flow model pattern

B.

Implement a mechanism to automatically reboot the entire web app in the event of a crash

C.

Split the suite into sub-suites and run each of them concurrently on different test environments

D.

Modify the architecture of the SUT to improve its testability and, if necessary, the TAA accordingly
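For context on the patterns this question mentions, here is a minimal page object sketch using Selenium; the pages, locators, URL, and credentials are illustrative, not taken from the question. Keeping locators and UI mechanics inside the page layer is what keeps the test scripts themselves short:

```python
# Minimal page object model sketch (Selenium; locators are illustrative).
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    """Encapsulates one page; test scripts never touch locators directly."""

    def __init__(self, driver) -> None:
        self.driver = driver

    def login(self, username: str, password: str) -> "DashboardPage":
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()
        return DashboardPage(self.driver)

class DashboardPage:
    def __init__(self, driver) -> None:
        self.driver = driver

    def greeting(self) -> str:
        return self.driver.find_element(By.CSS_SELECTOR, ".greeting").text

# The test script stays a few lines long:
def test_login_shows_greeting() -> None:
    driver = webdriver.Chrome()
    try:
        driver.get("https://staging.example.com/login")  # hypothetical URL
        dashboard = LoginPage(driver).login("qa_user", "secret")
        assert "Welcome" in dashboard.greeting()
    finally:
        driver.quit()
```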