Commit 88489469 by Marc Shaw

Finish removing fill in MR template feature

MR: gitlab.com/gitlab-org/gitlab/-/merge_requests/159194
Parent: e2ad8a70

Showing 0 additions and 396 deletions
@@ -439,11 +439,9 @@ RSpec/NamedSubject:
     - 'ee/spec/lib/gitlab/llm/concerns/exponential_backoff_spec.rb'
     - 'ee/spec/lib/gitlab/llm/templates/categorize_question_spec.rb'
     - 'ee/spec/lib/gitlab/llm/templates/explain_vulnerability_spec.rb'
-    - 'ee/spec/lib/gitlab/llm/templates/fill_in_merge_request_template_spec.rb'
     - 'ee/spec/lib/gitlab/llm/templates/generate_commit_message_spec.rb'
     - 'ee/spec/lib/gitlab/llm/templates/summarize_review_spec.rb'
     - 'ee/spec/lib/gitlab/llm/vertex_ai/completions/analyze_ci_job_failure_spec.rb'
-    - 'ee/spec/lib/gitlab/llm/vertex_ai/completions/fill_in_merge_request_template_spec.rb'
     - 'ee/spec/lib/gitlab/llm/vertex_ai/completions/generate_commit_message_spec.rb'
     - 'ee/spec/lib/gitlab/llm/vertex_ai/completions/summarize_review_spec.rb'
     - 'ee/spec/lib/gitlab/llm/vertex_ai/model_configurations/chat_spec.rb'
@@ -1551,7 +1551,6 @@ Input type: `AiActionInput`
 | <a id="mutationaiactionclientsubscriptionid"></a>`clientSubscriptionId` | [`String`](#string) | Client generated ID that can be subscribed to, to receive a response for the mutation. |
 | <a id="mutationaiactionexplaincode"></a>`explainCode` | [`AiExplainCodeInput`](#aiexplaincodeinput) | Input for explain_code AI action. |
 | <a id="mutationaiactionexplainvulnerability"></a>`explainVulnerability` | [`AiExplainVulnerabilityInput`](#aiexplainvulnerabilityinput) | Input for explain_vulnerability AI action. |
-| <a id="mutationaiactionfillinmergerequesttemplate"></a>`fillInMergeRequestTemplate` | [`AiFillInMergeRequestTemplateInput`](#aifillinmergerequesttemplateinput) | Input for fill_in_merge_request_template AI action. |
 | <a id="mutationaiactiongeneratecommitmessage"></a>`generateCommitMessage` | [`AiGenerateCommitMessageInput`](#aigeneratecommitmessageinput) | Input for generate_commit_message AI action. |
 | <a id="mutationaiactiongeneratecubequery"></a>`generateCubeQuery` | [`AiGenerateCubeQueryInput`](#aigeneratecubequeryinput) | Input for generate_cube_query AI action. |
 | <a id="mutationaiactiongeneratedescription"></a>`generateDescription` | [`AiGenerateDescriptionInput`](#aigeneratedescriptioninput) | Input for generate_description AI action. |
@@ -38877,19 +38876,6 @@ see the associated mutation type above.
 | <a id="aiexplainvulnerabilityinputincludesourcecode"></a>`includeSourceCode` | [`Boolean`](#boolean) | Include vulnerability source code in the AI prompt. |
 | <a id="aiexplainvulnerabilityinputresourceid"></a>`resourceId` | [`AiModelID!`](#aimodelid) | Global ID of the resource to mutate. |

-### `AiFillInMergeRequestTemplateInput`
-
-#### Arguments
-
-| Name | Type | Description |
-| ---- | ---- | ----------- |
-| <a id="aifillinmergerequesttemplateinputcontent"></a>`content` | [`String!`](#string) | Template content to fill in. |
-| <a id="aifillinmergerequesttemplateinputresourceid"></a>`resourceId` | [`AiModelID!`](#aimodelid) | Global ID of the resource to mutate. |
-| <a id="aifillinmergerequesttemplateinputsourcebranch"></a>`sourceBranch` | [`String!`](#string) | Source branch of the changes. |
-| <a id="aifillinmergerequesttemplateinputsourceprojectid"></a>`sourceProjectId` | [`ID`](#id) | ID of the project where the changes are from. |
-| <a id="aifillinmergerequesttemplateinputtargetbranch"></a>`targetBranch` | [`String!`](#string) | Target branch of where the changes will be merged into. |
-| <a id="aifillinmergerequesttemplateinputtitle"></a>`title` | [`String!`](#string) | Title of the merge request to be created. |
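Per the table above, the removed input type made every argument required except `sourceProjectId`. A standalone Ruby sketch of that validation rule (the helper and constant names here are hypothetical, not GitLab code or the graphql-ruby runtime):

```ruby
# Hypothetical sketch of the argument rules in the removed
# AiFillInMergeRequestTemplateInput type: source_project_id was the only
# optional argument; the rest were non-null (String! / AiModelID!).
REQUIRED_ARGS = %i[title source_branch target_branch content resource_id].freeze
OPTIONAL_ARGS = %i[source_project_id].freeze

def validate_input(params)
  missing = REQUIRED_ARGS.reject { |key| params.key?(key) }
  unknown = params.keys - REQUIRED_ARGS - OPTIONAL_ARGS

  errors = []
  errors << "missing: #{missing.join(', ')}" if missing.any?
  errors << "unknown: #{unknown.join(', ')}" if unknown.any?
  errors
end

puts validate_input(
  title: 'A merge request', source_branch: 'feature',
  target_branch: 'master', content: 'Template', resource_id: 'gid://1'
).inspect # => []
```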
 ### `AiGenerateCommitMessageInput`

 #### Arguments
Deleted file (class Types::Ai::FillInMergeRequestTemplateInputType):

# frozen_string_literal: true

module Types
  module Ai
    class FillInMergeRequestTemplateInputType < BaseMethodInputType
      graphql_name 'AiFillInMergeRequestTemplateInput'

      argument :title, ::GraphQL::Types::String,
        required: true,
        description: 'Title of the merge request to be created.'

      argument :source_project_id, ::GraphQL::Types::ID,
        required: false,
        description: 'ID of the project where the changes are from.'

      argument :source_branch, ::GraphQL::Types::String,
        required: true,
        description: 'Source branch of the changes.'

      argument :target_branch, ::GraphQL::Types::String,
        required: true,
        description: 'Target branch of where the changes will be merged into.'

      argument :content, ::GraphQL::Types::String,
        required: true,
        description: 'Template content to fill in.'
    end
  end
end
Deleted file (class Llm::FillInMergeRequestTemplateService):

# frozen_string_literal: true

# rubocop:disable Gitlab/BoundedContexts -- TODO file to be removed
module Llm
  class FillInMergeRequestTemplateService < BaseService
    extend ::Gitlab::Utils::Override

    override :valid?
    def valid?
      false
    end

    private

    def ai_action
      :fill_in_merge_request_template
    end

    def perform
      schedule_completion_worker
    end
  end
end
# rubocop:enable Gitlab/BoundedContexts
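The service above was already a stub before deletion: `valid?` unconditionally returns false, so the base class rejects every request before `perform` can schedule the completion worker. A minimal sketch of that gating pattern (the class names and the assumption that `BaseService#execute` guards on `valid?` are illustrative, not GitLab's actual implementation):

```ruby
# Minimal sketch, assuming execute checks valid? before running perform,
# the way the removed Llm::FillInMergeRequestTemplateService relies on.
class SketchBaseService
  def execute
    return { status: :error, message: 'action is not valid' } unless valid?

    perform
  end

  def valid?
    true
  end
end

class DisabledAction < SketchBaseService
  def valid?
    false # feature switched off: every call short-circuits to an error
  end

  def perform
    { status: :success }
  end
end

puts DisabledAction.new.execute[:status] # => error
```

Overriding `valid?` like this disables the action while GraphQL routing and the class itself remain in place, which lets the feature be removed across several merge requests without breaking callers.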
Deleted file (class Gitlab::Llm::Templates::FillInMergeRequestTemplate):

# frozen_string_literal: true

module Gitlab
  module Llm
    module Templates
      class FillInMergeRequestTemplate
        include Gitlab::Utils::StrongMemoize

        def initialize(user, project, params = {})
          @user = user
          @project = project
          @params = params
        end

        def to_prompt
          <<~PROMPT
            You are an AI code assistant that can understand DIFF in Git diff format, TEMPLATE in a Markdown format and can produce Markdown as a result.

            You will be given TITLE, DIFF, and TEMPLATE. Do the following:
            1. Create a merge request description from the given TEMPLATE.
            2. Given the TITLE and DIFF, explain the diff in detail and add it to the section in the description for explaining the DIFF.
            3. For sections with <!-- AI Skip --> placeholder in the description, copy the content from TEMPLATE.
            4. Return the merge request description.

            TITLE: #{params[:title]}

            DIFF:
            #{extracted_diff}

            TEMPLATE:
            #{content}
          PROMPT
        end

        private

        attr_reader :user, :project, :params

        def extracted_diff
          compare = CompareService
            .new(source_project, params[:source_branch])
            .execute(project, params[:target_branch])

          return unless compare

          # Extract only the diff strings and discard everything else
          compare.raw_diffs.to_a.map do |raw_diff|
            # Each diff string starts with information about the lines changed,
            # bracketed by @@. Removing this saves us tokens.
            #
            # Ex: @@ -0,0 +1,58 @@\n+# frozen_string_literal: true\n+\n+module MergeRequests\n+
            raw_diff.diff.sub(Gitlab::Regex.git_diff_prefix, "")
          end.join.truncate_words(2000)
        end

        def content
          # We truncate the template content to 600 words so we can ensure
          # that it fits the maxOutputTokens of Vertex AI, which is set to
          # 1024 in the client.
          params[:content]&.truncate_words(600)
        end

        def source_project
          return project unless params[:source_project_id]

          source_project = Project.find_by_id(params[:source_project_id])

          return source_project if source_project.present? && user.can?(:create_merge_request_from, source_project)

          project
        end
      end
    end
  end
end
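Two token-saving tricks in the deleted template class are easy to demonstrate in isolation: stripping the `@@ … @@` hunk header from each diff string and word-truncating the result. A standalone approximation (the hunk-header regex is an assumption, since `Gitlab::Regex.git_diff_prefix` is not reproduced here, and `truncate_words` is a rough stand-in for ActiveSupport's `String#truncate_words`):

```ruby
# Assumed hunk-header pattern; the real Gitlab::Regex.git_diff_prefix may differ.
DIFF_PREFIX = /\A@@ -\d+(?:,\d+)? \+\d+(?:,\d+)? @@/

# Rough stand-in for ActiveSupport's String#truncate_words.
def truncate_words(text, limit)
  words = text.split(/\s+/).reject(&:empty?)
  words.length <= limit ? text : words.first(limit).join(' ')
end

raw_diff = "@@ -0,0 +1,2 @@\n+class Feature\n+end"
stripped = raw_diff.sub(DIFF_PREFIX, '')

puts stripped.inspect # => "\n+class Feature\n+end"
puts truncate_words(stripped, 2)
```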
@@ -86,15 +86,6 @@ class AiFeaturesCatalogue
       self_managed: true,
       internal: false
     },
-    fill_in_merge_request_template: {
-      service_class: ::Gitlab::Llm::VertexAi::Completions::FillInMergeRequestTemplate,
-      prompt_class: ::Gitlab::Llm::Templates::FillInMergeRequestTemplate,
-      feature_category: :code_review_workflow,
-      execute_method: ::Llm::FillInMergeRequestTemplateService,
-      maturity: :experimental,
-      self_managed: false,
-      internal: false
-    },
     summarize_new_merge_request: {
       service_class: ::Gitlab::Llm::VertexAi::Completions::SummarizeNewMergeRequest,
       prompt_class: ::Gitlab::Llm::Templates::SummarizeNewMergeRequest,
Deleted file (class Gitlab::Llm::VertexAi::Completions::FillInMergeRequestTemplate):

# frozen_string_literal: true

module Gitlab
  module Llm
    module VertexAi
      module Completions
        class FillInMergeRequestTemplate < Gitlab::Llm::Completions::Base
          def execute
            response = response_for(user, project, options)
            response_modifier = ::Gitlab::Llm::VertexAi::ResponseModifiers::Predictions.new(response)

            ::Gitlab::Llm::GraphqlSubscriptionResponseService.new(
              user, project, response_modifier, options: response_options
            ).execute

            response_modifier
          end

          private

          def project
            resource
          end

          def response_for(user, project, options)
            template = ai_prompt_class.new(user, project, options)

            request(user, template)
          end

          def request(user, template)
            ::Gitlab::Llm::VertexAi::Client
              .new(user, unit_primitive: 'fill_in_merge_request_template', tracking_context: tracking_context)
              .text(content: template.to_prompt)
          end
        end
      end
    end
  end
end
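The completion class hands the raw client response to `ResponseModifiers::Predictions` before publishing it. Based on the response fixtures in the specs in this commit, a self-contained sketch of pulling content (or an error) out of such a payload; the helper name is hypothetical and this is not the real modifier class:

```ruby
require 'json'

# Hypothetical extractor mirroring the Vertex AI text-model payload shape
# used in the deleted specs; not the real ResponseModifiers::Predictions.
def prediction_content(response_json)
  parsed = JSON.parse(response_json)
  return parsed['error'] if parsed.key?('error')

  parsed.dig('predictions', 0, 'content')
end

ok  = { 'predictions' => [{ 'content' => 'AI filled in template' }] }.to_json
err = { 'error' => 'Error' }.to_json

puts prediction_content(ok)  # => AI filled in template
puts prediction_content(err) # => Error
```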
@@ -57,10 +57,6 @@
     ai_action { :resolve_vulnerability }
   end

-  trait :fill_in_merge_request_template do
-    ai_action { :fill_in_merge_request_template }
-  end
-
   trait :summarize_new_merge_request do
     ai_action { :summarize_new_merge_request }
   end
Deleted file (spec for Gitlab::Llm::Templates::FillInMergeRequestTemplate):

# frozen_string_literal: true

require 'spec_helper'

RSpec.describe Gitlab::Llm::Templates::FillInMergeRequestTemplate, feature_category: :code_review_workflow do
  let_it_be(:project) { create(:project, :repository) }
  let_it_be(:user) { project.owner }

  let(:source_project) { project }
  let(:params) do
    {
      source_project_id: source_project.id,
      source_branch: 'feature',
      target_branch: 'master',
      title: 'A merge request',
      content: 'This is content'
    }
  end

  subject { described_class.new(user, project, params) }

  describe '#to_prompt' do
    it 'includes title param' do
      expect(subject.to_prompt).to include(params[:title])
    end

    it 'includes raw diff' do
      expect(subject.to_prompt)
        .to include("+class Feature\n+ def foo\n+ puts 'bar'\n+ end\n+end")
    end

    it 'includes the content' do
      expect(subject.to_prompt).to include('This is content')
    end

    context 'when user cannot create merge request from source_project_id' do
      let(:source_project) { create(:project) }

      it 'includes diff comparison from project' do
        expect(subject.to_prompt)
          .to include("+class Feature\n+ def foo\n+ puts 'bar'\n+ end\n+end")
      end
    end

    context 'when no source_project_id is specified' do
      let(:params) do
        {
          source_branch: 'feature',
          target_branch: 'master',
          title: 'A merge request',
          content: 'This is content'
        }
      end

      it 'includes diff comparison from project' do
        expect(subject.to_prompt)
          .to include("+class Feature\n+ def foo\n+ puts 'bar'\n+ end\n+end")
      end
    end
  end
end
Deleted file (spec for Gitlab::Llm::VertexAi::Completions::FillInMergeRequestTemplate):

# frozen_string_literal: true

require 'spec_helper'

RSpec.describe Gitlab::Llm::VertexAi::Completions::FillInMergeRequestTemplate, feature_category: :code_review_workflow do
  let(:prompt_class) { Gitlab::Llm::Templates::FillInMergeRequestTemplate }
  let(:options) { {} }
  let(:response_modifier) { double }
  let(:response_service) { double }

  let_it_be(:user) { create(:user) }
  let_it_be(:project) { create(:project) }

  let(:params) do
    [user, project, response_modifier, { options: { ai_action: :fill_in_merge_request_template, request_id: 'uuid' } }]
  end

  let(:prompt_message) do
    build(:ai_message, :fill_in_merge_request_template, user: user, resource: project, request_id: 'uuid')
  end

  subject { described_class.new(prompt_message, prompt_class, options) }

  describe '#execute' do
    context 'when the text client returns a successful response' do
      let(:example_answer) { "AI filled in template" }

      let(:example_response) do
        {
          "predictions" => [
            {
              "content" => example_answer,
              "safetyAttributes" => {
                "categories" => ["Violent"],
                "scores" => [0.4000000059604645],
                "blocked" => false
              }
            }
          ]
        }
      end

      before do
        allow_next_instance_of(Gitlab::Llm::VertexAi::Client) do |client|
          allow(client).to receive(:text).and_return(example_response.to_json)
        end
      end

      it 'publishes the content from the AI response' do
        expect(::Gitlab::Llm::VertexAi::ResponseModifiers::Predictions)
          .to receive(:new)
          .with(example_response.to_json)
          .and_return(response_modifier)

        expect(::Gitlab::Llm::GraphqlSubscriptionResponseService)
          .to receive(:new)
          .with(*params)
          .and_return(response_service)

        expect(response_service).to receive(:execute)

        subject.execute
      end
    end

    context 'when the text client returns an unsuccessful response' do
      let(:error) { { error: 'Error' } }

      before do
        allow_next_instance_of(Gitlab::Llm::VertexAi::Client) do |client|
          allow(client).to receive(:text).and_return(error.to_json)
        end
      end

      it 'publishes the error to the graphql subscription' do
        expect(::Gitlab::Llm::VertexAi::ResponseModifiers::Predictions)
          .to receive(:new)
          .with(error.to_json)
          .and_return(response_modifier)

        expect(::Gitlab::Llm::GraphqlSubscriptionResponseService)
          .to receive(:new)
          .with(*params)
          .and_return(response_service)

        expect(response_service).to receive(:execute)

        subject.execute
      end
    end
  end
end
Deleted file (spec for Llm::FillInMergeRequestTemplateService):

# frozen_string_literal: true

require 'spec_helper'

RSpec.describe Llm::FillInMergeRequestTemplateService, :saas, feature_category: :code_review_workflow do
  let_it_be(:user) { create(:user) }
  let_it_be_with_reload(:group) { create(:group_with_plan, plan: :ultimate_plan) }
  let_it_be(:resource) { create(:project, :public, group: group) }

  let(:current_user) { user }

  describe '#perform' do
    include_context 'with ai features enabled for group'

    before_all do
      group.add_guest(user)
    end

    subject { described_class.new(current_user, resource, {}).execute }

    it { is_expected.to be_error.and have_attributes(message: eq(described_class::INVALID_MESSAGE)) }

    context 'when user is not member of project group' do
      let(:current_user) { create(:user) }

      it { is_expected.to be_error.and have_attributes(message: eq(described_class::INVALID_MESSAGE)) }
    end

    context 'when general feature flag is disabled' do
      before do
        stub_feature_flags(ai_global_switch: false)
      end

      it { is_expected.to be_error.and have_attributes(message: eq(described_class::INVALID_MESSAGE)) }
    end

    context 'when resource is not a project' do
      let(:resource) { create(:epic, group: group) }

      it { is_expected.to be_error.and have_attributes(message: eq(described_class::INVALID_MESSAGE)) }
    end

    context 'when user has no ability to fill_in_merge_request_template' do
      let(:fill_in_merge_request_template_enabled) { false }

      it { is_expected.to be_error.and have_attributes(message: eq(described_class::INVALID_MESSAGE)) }
    end
  end
end