
Oracle 1z0-1127-25 - Oracle Cloud Infrastructure 2025 Generative AI Professional


An LLM emits intermediate reasoning steps as part of its responses. Which of the following techniques is being utilized?

A. In-context Learning
B. Step-Back Prompting
C. Least-to-Most Prompting
D. Chain-of-Thought
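
For context on option D, here is a minimal sketch of a chain-of-thought style prompt. Only the prompt construction is shown (no model call), and the few-shot example text is invented for illustration.

```python
# Minimal illustration of a chain-of-thought style prompt:
# the worked example shows intermediate reasoning steps, encouraging
# the model to emit its own reasoning before the final answer.
FEW_SHOT_EXAMPLE = (
    "Q: A pack has 12 pencils. If 3 packs are bought and 5 pencils are used, "
    "how many pencils remain?\n"
    "A: Let's think step by step. 3 packs contain 3 * 12 = 36 pencils. "
    "After using 5, 36 - 5 = 31 pencils remain. The answer is 31.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a worked example with visible reasoning, then the new question."""
    return FEW_SHOT_EXAMPLE + f"Q: {question}\nA: Let's think step by step."

if __name__ == "__main__":
    print(build_cot_prompt("A train travels 60 km/h for 2.5 hours. How far does it go?"))
```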

What is the function of "Prompts" in a chatbot system?

A. They store the chatbot's linguistic knowledge.
B. They are used to initiate and guide the chatbot's responses.
C. They are responsible for the underlying mechanics of the chatbot.
D. They handle the chatbot's memory and recall abilities.
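
To illustrate option B, the sketch below assembles a system prompt and a user prompt to initiate and steer a chatbot turn. It uses the common role/content message convention and assumes no particular API client; the prompt wording is made up for illustration.

```python
# A system prompt guides tone and behavior; the user prompt initiates the turn.
# The assembled messages would be sent to whichever chat model is in use.
def build_messages(user_question: str) -> list[dict]:
    system_prompt = (
        "You are a concise support assistant for a cloud service. "
        "Answer in at most three sentences and ask a follow-up question "
        "if details are missing."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_question},
    ]

if __name__ == "__main__":
    for message in build_messages("How do I rotate my API keys?"):
        print(f'{message["role"]}: {message["content"]}')
```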

How does the structure of a vector database differ from that of a traditional relational database?

A. A vector database stores data in a linear or tabular format.
B. It is not optimized for high-dimensional spaces.
C. It is based on distances and similarities in a vector space.
D. It uses simple row-based data storage.
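
A small NumPy sketch of the idea in option C follows: instead of row lookups, a vector database retrieves by distance or similarity in an embedding space. The documents and embedding vectors here are toy values, not output from a real embedding model.

```python
import numpy as np

# Toy "index" of embeddings: in a real vector database these would be
# high-dimensional embeddings produced by an embedding model.
documents = ["reset a password", "configure a firewall", "rotate API keys"]
index = np.array([
    [0.9, 0.1, 0.0],
    [0.1, 0.8, 0.2],
    [0.7, 0.2, 0.6],
])

def cosine_similarity(query_vec: np.ndarray, matrix: np.ndarray) -> np.ndarray:
    """Similarity between one query vector and every row of the index."""
    return (matrix @ query_vec) / (
        np.linalg.norm(matrix, axis=1) * np.linalg.norm(query_vec)
    )

query = np.array([0.8, 0.1, 0.5])       # embedding of the user's query
scores = cosine_similarity(query, index)
best = int(np.argmax(scores))            # nearest neighbor by similarity
print(documents[best], scores[best])
```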

What does the Ranker do in a text generation system?

A. It generates the final text based on the user's query.
B. It sources information from databases to use in text generation.
C. It evaluates and prioritizes the information retrieved by the Retriever.
D. It interacts with the user to understand the query better.
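
Option C can be made concrete with the sketch below: after a retriever returns candidate passages, a ranker scores each candidate against the query and reorders them before generation. The scoring function here is a simple keyword-overlap stand-in; a real system would typically use a cross-encoder or similar reranking model.

```python
# Toy retriever output: (passage, retrieval_score) pairs.
retrieved = [
    ("Vector databases index embeddings for similarity search.", 0.62),
    ("Relational databases store rows in tables.", 0.58),
    ("Embeddings map text to points in a vector space.", 0.71),
]

def rank(query: str, candidates: list[tuple[str, float]]) -> list[str]:
    """Re-score candidates against the query and return them best-first."""
    query_terms = set(query.lower().split())

    def score(item: tuple[str, float]) -> float:
        passage, retrieval_score = item
        overlap = len(query_terms & set(passage.lower().split()))
        return overlap + retrieval_score  # combine overlap with the retriever's score

    return [passage for passage, _ in sorted(candidates, key=score, reverse=True)]

for passage in rank("how do vector databases use embeddings", retrieved):
    print(passage)
```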

Which statement best describes the role of encoder and decoder models in natural language processing?

A. Encoder models and decoder models both convert sequences of words into vector representations without generating new text.
B. Encoder models take a sequence of words and predict the next word in the sequence, whereas decoder models convert a sequence of words into a numerical representation.
C. Encoder models convert a sequence of words into a vector representation, and decoder models take this vector representation to generate a sequence of words.
D. Encoder models are used only for numerical calculations, whereas decoder models are used to interpret the calculated numerical values back into text.
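
For option C, the sketch below uses a small encoder-decoder checkpoint to show the split in roles: the encoder maps the input tokens to vector representations, and the decoder generates a new word sequence from them. It assumes the Hugging Face transformers library (with PyTorch) and the public t5-small checkpoint are available.

```python
# Requires: pip install transformers torch  (and a cached/downloadable t5-small)
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: The house is small.", return_tensors="pt")

# Encoder: sequence of words -> sequence of vector representations.
encoder_states = model.encoder(**inputs).last_hidden_state
print(encoder_states.shape)  # (batch, sequence_length, hidden_size)

# Decoder (via generate): vector representations -> a generated word sequence.
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```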

Which is a characteristic of T-Few fine-tuning for Large Language Models (LLMs)?

A. It updates all the weights of the model uniformly.
B. It does not update any weights but restructures the model architecture.
C. It selectively updates only a fraction of the model’s weights.
D. It increases the training time as compared to Vanilla fine-tuning.
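
Option C reflects the idea behind T-Few: the pretrained weights stay frozen and only a small set of added parameters, (IA)^3-style rescaling vectors, is trained. The PyTorch sketch below is a conceptual illustration of that idea on a single linear layer, not the actual T-Few implementation.

```python
import torch
from torch import nn

class IA3ScaledLinear(nn.Module):
    """Wrap a frozen linear layer with a small learned rescaling vector.

    Only `self.scale` is trainable, so the number of updated parameters is a
    tiny fraction of the wrapped layer's weights -- the core idea behind
    parameter-efficient methods such as T-Few / (IA)^3.
    """

    def __init__(self, base: nn.Linear):
        super().__init__()
        self.base = base
        for param in self.base.parameters():
            param.requires_grad = False          # freeze pretrained weights
        self.scale = nn.Parameter(torch.ones(base.out_features))  # learned vector

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) * self.scale         # elementwise rescaling

layer = IA3ScaledLinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable {trainable} of {total} parameters")
```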