
Oracle 1z0-1127-25 - Oracle Cloud Infrastructure 2025 Generative AI Professional

Total 88 questions

How does the temperature setting in a decoding algorithm influence the probability distribution over the vocabulary?

A.

Increasing the temperature removes the impact of the most likely word.

B.

Decreasing the temperature broadens the distribution, making less likely words more probable.

C.

Increasing the temperature flattens the distribution, allowing for more varied word choices.

D.

Temperature has no effect on probability distribution; it only changes the speed of decoding.
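The effect the question describes can be seen directly in code: temperature divides the logits before the softmax, so raising it flattens the distribution and lowering it sharpens the peak. A minimal sketch with toy logits (the values are illustrative, not from any real model):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then normalize with a softmax.

    Higher temperature flattens the distribution (more varied word choices);
    lower temperature concentrates probability on the most likely token.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                        # toy next-token logits
sharp = softmax_with_temperature(logits, 0.5)   # low temperature: peaked
flat = softmax_with_temperature(logits, 2.0)    # high temperature: flattened
# The top token's probability shrinks as temperature rises.
```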

In the context of generating text with a Large Language Model (LLM), what does the process of greedy decoding entail?

A.

Selecting a random word from the entire vocabulary at each step

B.

Picking a word based on its position in a sentence structure

C.

Choosing the word with the highest probability at each step of decoding

D.

Using a weighted random selection based on a modulated distribution
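Greedy decoding is the simplest of these strategies and is easy to state in code: at every step, take the single most probable token with no sampling involved. A sketch over hypothetical per-step distributions:

```python
def greedy_decode(step_probs):
    """Greedy decoding: pick the highest-probability token at each step."""
    return [max(probs, key=probs.get) for probs in step_probs]

# Hypothetical next-token distributions for two decoding steps.
steps = [
    {"the": 0.6, "a": 0.3, "an": 0.1},
    {"cat": 0.5, "dog": 0.4, "fox": 0.1},
]
greedy_decode(steps)  # deterministically picks "the", then "cat"
```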

You create a fine-tuning dedicated AI cluster to customize a foundational model with your custom training data. How many unit hours are required for fine-tuning if the cluster is active for 10 days?

A.

480 unit hours

B.

240 unit hours

C.

744 unit hours

D.

20 unit hours
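The unit-hour arithmetic behind this question is just cluster units multiplied by hours active. A sketch, assuming the commonly documented figure of two units per fine-tuning dedicated AI cluster (verify the unit count against current OCI documentation):

```python
UNITS_PER_FINE_TUNING_CLUSTER = 2  # assumption: fine-tuning clusters use 2 units
HOURS_PER_DAY = 24

days_active = 10
unit_hours = UNITS_PER_FINE_TUNING_CLUSTER * days_active * HOURS_PER_DAY
# 2 units * 10 days * 24 hours = 480 unit hours
```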

Which is a key advantage of using T-Few over Vanilla fine-tuning in the OCI Generative AI service?

A.

Reduced model complexity

B.

Enhanced generalization to unseen data

C.

Increased model interpretability

D.

Faster training time and lower cost

What does accuracy measure in the context of fine-tuning results for a generative model?

A.

The number of predictions a model makes, regardless of whether they are correct or incorrect

B.

The proportion of incorrect predictions made by the model during an evaluation

C.

How many predictions the model made correctly out of all the predictions in an evaluation

D.

The depth of the neural network layers used in the model
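Accuracy as used in fine-tuning evaluation reduces to a single ratio: correct predictions over total predictions. A minimal sketch with made-up labels:

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the reference labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Toy evaluation: 3 of 4 predictions match the references.
accuracy(["a", "b", "b", "a"], ["a", "b", "a", "a"])  # 0.75
```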

What is the primary function of the "temperature" parameter in the OCI Generative AI Generation models?

A.

Controls the randomness of the model's output, affecting its creativity

B.

Specifies a string that tells the model to stop generating more content

C.

Assigns a penalty to tokens that have already appeared in the preceding text

D.

Determines the maximum number of tokens the model can generate per response

Which statement is true about the "Top p" parameter of the OCI Generative AI Generation models?

A.

"Top p" selects tokens from the "Top k" tokens sorted by probability.

B.

"Top p" assigns penalties to frequently occurring tokens.

C.

"Top p" limits token selection based on the sum of their probabilities.

D.

"Top p" determines the maximum number of tokens per response.
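The cumulative-probability behavior of "Top p" (nucleus sampling) can be sketched as a filter: sort tokens by probability and keep the smallest set whose probabilities sum to at least p. The token names and probabilities below are illustrative:

```python
def top_p_filter(token_probs, p):
    """Keep the smallest set of top tokens whose cumulative probability >= p."""
    ranked = sorted(token_probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for token, prob in ranked:
        kept.append(token)
        cumulative += prob
        if cumulative >= p:
            break  # the "nucleus" is complete; sampling happens within it
    return kept

probs = {"the": 0.5, "a": 0.3, "an": 0.15, "this": 0.05}
top_p_filter(probs, 0.75)  # nucleus covers "the" and "a" (0.5 + 0.3 >= 0.75)
```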

Which is NOT a category of pretrained foundational models available in the OCI Generative AI service?

A.

Summarization models

B.

Generation models

C.

Translation models

D.

Embedding models

What do embeddings in Large Language Models (LLMs) represent?

A.

The color and size of the font in textual data

B.

The frequency of each word or pixel in the data

C.

The semantic content of data in high-dimensional vectors

D.

The grammatical structure of sentences in the data
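That embeddings capture semantic content is typically exploited by comparing vectors with cosine similarity: semantically related inputs map to nearby vectors. A sketch with toy 3-dimensional vectors (real embedding models produce hundreds of dimensions):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical embeddings: "cat" and "kitten" should be closer than "cat" and "car".
cat = [0.9, 0.1, 0.3]
kitten = [0.85, 0.15, 0.35]
car = [0.1, 0.9, 0.2]
cosine_similarity(cat, kitten) > cosine_similarity(cat, car)  # related words score higher
```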
