Could anyone help me with some advice on how to create a Natural Language Inference pipeline in Haystack?

I want to use the Haystack framework to create a pipeline for Natural Language Inference (NLI) on the response from a Retrieval-Augmented Generation (RAG) application.

Because I'm using haystack-ai, I cannot use farm-haystack. If I could use farm-haystack (v1.0), I believe I could do something like the following:

from haystack import Pipeline
from haystack.nodes import HuggingFaceTextClassifier  # farm-haystack (1.x) namespace

entailment_model = "roberta-large-mnli"  # placeholder: any NLI model from the Hugging Face Hub

classifier = HuggingFaceTextClassifier(
    model_name_or_path=entailment_model,
    task="text-classification",  # Task type: text classification
    labels=[
        "entailment",
        "contradiction",
        "neutral",
    ],  # Define the labels your model is trained on
)

classifier_pipeline = Pipeline()
classifier_pipeline.add_component("classifier_llm", classifier)
premise = "The sun rises in the east and sets in the west."
hypothesis = "The sun rises in the east."

classifier_pipeline.run({"classifier_llm": {"text": premise, "text_pair": hypothesis}})

However, I cannot see how to achieve the same in Haystack v2.0 (haystack-ai).

Any comments or pointers welcome.


1 Answer

I am one of the maintainers of Haystack and the author of the Entailment Checker node (Haystack 1.x).

After some investigation, I found that the Haystack 2.x zero-shot classification components (TransformersZeroShotDocumentClassifier and TransformersZeroShotTextRouter) do not natively support your use case. This is the same gap that led me to develop the custom Entailment Checker node for Haystack 1.x.

Suggestions:

  • You can look at the code of the Entailment Checker node and adapt it for Haystack 2.x; a minimal sketch of what that could look like follows this list.
  • You can open an issue on Haystack to request this feature. If there's sufficient interest, we can work on developing it.
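
To make the first suggestion more concrete, here is a minimal, untested sketch of how an entailment check could be written as a custom Haystack 2.x component wrapping a Hugging Face NLI model via the transformers text-classification pipeline. The EntailmentChecker class, its premise/hypothesis inputs, and the roberta-large-mnli model are illustrative assumptions, not an existing Haystack component; only the @component decorator, component.output_types, and the Pipeline API come from Haystack 2.x itself.

from haystack import Pipeline, component
from transformers import pipeline


@component
class EntailmentChecker:
    """Illustrative custom component: scores entailment between a premise and a
    hypothesis using a Hugging Face NLI model (text-classification pipeline)."""

    def __init__(self, model: str = "roberta-large-mnli"):  # placeholder NLI model
        self.classifier = pipeline("text-classification", model=model)

    @component.output_types(label=str, score=float)
    def run(self, premise: str, hypothesis: str):
        # NLI models expect the premise/hypothesis as a text pair
        result = self.classifier({"text": premise, "text_pair": hypothesis})
        if isinstance(result, list):  # some transformers versions wrap the dict in a list
            result = result[0]
        return {"label": result["label"], "score": result["score"]}


nli_pipeline = Pipeline()
nli_pipeline.add_component("entailment_checker", EntailmentChecker())

premise = "The sun rises in the east and sets in the west."
hypothesis = "The sun rises in the east."

print(nli_pipeline.run({"entailment_checker": {"premise": premise, "hypothesis": hypothesis}}))

In a RAG setting you would typically feed the retrieved context in as the premise and the generated answer as the hypothesis, and connect the component after the generator in your pipeline.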
