class allms.domain.input_data.InputData

@dataclass
class InputData:
    input_mappings: Dict[str, str]
    id: str
Fields

input_mappings (Dict[str, str]): Maps the symbolic variables used in the prompt to the actual data that will be injected in their place. You must provide a mapping for every symbolic variable used in the prompt.

id (str): Unique identifier. Requests are executed asynchronously, so responses are not returned in the same order as the input data; use this field to match each response to its input.
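A minimal sketch of constructing an InputData for a prompt with one symbolic variable. To keep the example runnable without allms installed, it defines a stand-in dataclass with the documented fields; in real code you would import the class from allms.domain.input_data instead, and the prompt text here is an invented illustration:

```python
from dataclasses import dataclass
from typing import Dict

# Stand-in mirroring the documented dataclass; in real code use:
# from allms.domain.input_data import InputData
@dataclass
class InputData:
    input_mappings: Dict[str, str]
    id: str

# A hypothetical prompt such as "Summarize this review: {review}"
# uses the symbolic variable {review}, so input_mappings must
# supply a value for it.
data = InputData(
    input_mappings={"review": "Great product, fast delivery."},
    id="review-0",
)
```

The id is chosen by the caller; picking something traceable (a row index, a database key) makes it easy to join responses back to their inputs later.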
class allms.domain.response.ResponseData

@dataclass
class ResponseData:
    response: Union[str, BaseModel]
    input_data: Optional[InputData] = None
    number_of_prompt_tokens: Optional[int] = None
    number_of_generated_tokens: Optional[int] = None
    error: Optional[str] = None
Fields

response (Union[str, BaseModel]): Contains the model's response. If the output_data_model_class param was provided to the generate() method, it contains the response parsed into that class. If output_data_model_class wasn't provided, it contains the raw string returned by the model.

input_data (Optional[InputData]): If input_data was provided to the generate() method, it is copied into this field.

number_of_prompt_tokens (Optional[int]): Number of tokens used in the prompt.

number_of_generated_tokens (Optional[int]): Number of tokens generated by the model.

error (Optional[str]): If an error occurred that prevented the generation pipeline from completing, it is reported here.
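Because responses can arrive in any order and may carry an error instead of a usable result, a common pattern is to key them by the id attached to each InputData and filter out failures. A minimal sketch, again using stand-in dataclasses with the documented fields (hand-built ResponseData values stand in for what generate() would return) so the snippet runs without allms installed:

```python
from dataclasses import dataclass
from typing import Dict, Optional

# Stand-ins mirroring the documented dataclasses; in real code
# import them from allms.domain.input_data / allms.domain.response.
@dataclass
class InputData:
    input_mappings: Dict[str, str]
    id: str

@dataclass
class ResponseData:
    response: str
    input_data: Optional[InputData] = None
    number_of_prompt_tokens: Optional[int] = None
    number_of_generated_tokens: Optional[int] = None
    error: Optional[str] = None

# Responses arrive in arbitrary order (async execution), so key
# them by the id that was attached to each InputData.
responses = [
    ResponseData(response="ok-2", input_data=InputData({"q": "b"}, id="2")),
    ResponseData(response="ok-1", input_data=InputData({"q": "a"}, id="1")),
    ResponseData(response="", input_data=InputData({"q": "c"}, id="3"),
                 error="timeout"),
]

by_id = {r.input_data.id: r for r in responses if r.input_data is not None}
succeeded = {i: r.response for i, r in by_id.items() if r.error is None}
failed = {i: r.error for i, r in by_id.items() if r.error is not None}
```

Checking error before touching response matters: when the pipeline fails partway, response may be empty or unparsed even though a ResponseData object was still returned.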