I am trying to create a SwiftUI component where an image can be stretched dynamically from draggable points. The goal is that when a user moves any of the draggable points, the image distorts accordingly, as seen in Photoshop's Free Transform tool.

I attempted to use CIPerspectiveTransform, but it only applies a perspective warp; it does not stretch the image from the dragged point. I suspect I might need Metal for this, but I'm unsure how to apply the deformation properly.

What I Have Tried:

- Used CIPerspectiveTransform, but it does not allow stretching.
- Created draggable points to control the image manipulation.
- Converted points from view space to image space.

Expected Behavior:

- When a user moves a corner or edge point, the image should stretch from that point instead of just warping the perspective.
- Smooth, real-time rendering.

import SwiftUI
import CoreImage
import CoreImage.CIFilterBuiltins

struct AdjustableImage: View {
    let uiImage: UIImage
    @State private var topLeading: CGPoint = .zero
    @State private var topTrailing: CGPoint = .zero
    @State private var bottomLeading: CGPoint = .zero
    @State private var bottomTrailing: CGPoint = .zero
    @State private var processedImage: UIImage?
    @State private var lastSize: CGSize = .zero

    var body: some View {
        GeometryReader { geometry in
            ZStack {
                if let processedImage = processedImage {
                    Image(uiImage: processedImage)
                        .resizable()
                        .scaledToFit()
                        .frame(width: geometry.size.width, height: geometry.size.height)
                } else {
                    Color.clear
                }

                DraggablePoint(position: $topLeading, geometry: geometry)
                DraggablePoint(position: $topTrailing, geometry: geometry)
                DraggablePoint(position: $bottomLeading, geometry: geometry)
                DraggablePoint(position: $bottomTrailing, geometry: geometry)
            }
            .onAppear {
                updatePoints(for: geometry.size)
                processImage(size: geometry.size)
            }
            .onChange(of: topLeading) { _ in processImage(size: geometry.size) }
            .onChange(of: topTrailing) { _ in processImage(size: geometry.size) }
            .onChange(of: bottomLeading) { _ in processImage(size: geometry.size) }
            .onChange(of: bottomTrailing) { _ in processImage(size: geometry.size) }
        }
    }

    private func updatePoints(for size: CGSize) {
        guard size != lastSize else { return }
        lastSize = size
        
        topLeading = .zero
        topTrailing = CGPoint(x: size.width, y: 0)
        bottomLeading = CGPoint(x: 0, y: size.height)
        bottomTrailing = CGPoint(x: size.width, y: size.height)
    }

    private func processImage(size: CGSize) {
        guard let inputImage = CIImage(image: uiImage) else { return }
        
        let imageSize = uiImage.size
        let scaleX = imageSize.width / size.width
        let scaleY = imageSize.height / size.height

        let transformedPoints = [
            convertPoint(topLeading, scaleX: scaleX, scaleY: scaleY, viewHeight: size.height),
            convertPoint(topTrailing, scaleX: scaleX, scaleY: scaleY, viewHeight: size.height),
            convertPoint(bottomLeading, scaleX: scaleX, scaleY: scaleY, viewHeight: size.height),
            convertPoint(bottomTrailing, scaleX: scaleX, scaleY: scaleY, viewHeight: size.height)
        ]

        guard let filter = CIFilter(name: "CIPerspectiveTransform") else { return }
        filter.setValue(inputImage, forKey: kCIInputImageKey)
        filter.setValue(transformedPoints[0], forKey: "inputTopLeft")
        filter.setValue(transformedPoints[1], forKey: "inputTopRight")
        filter.setValue(transformedPoints[2], forKey: "inputBottomLeft")
        filter.setValue(transformedPoints[3], forKey: "inputBottomRight")

        guard let outputImage = filter.outputImage else { return }
        // Note: creating a CIContext on every drag update is expensive;
        // for smooth real-time rendering, cache a single instance instead.
        let context = CIContext()
        guard let cgImage = context.createCGImage(outputImage, from: outputImage.extent) else { return }
        processedImage = UIImage(cgImage: cgImage)
    }

    /// Converts a point from SwiftUI view space (origin top-left, y pointing down)
    /// to Core Image space (origin bottom-left, y pointing up), scaled to image pixels.
    private func convertPoint(_ point: CGPoint, scaleX: CGFloat, scaleY: CGFloat, viewHeight: CGFloat) -> CIVector {
        let x = point.x * scaleX
        let y = (viewHeight - point.y) * scaleY // flip the y-axis for Core Image
        return CIVector(x: x, y: y)
    }
}

struct DraggablePoint: View {
    @Binding var position: CGPoint
    var geometry: GeometryProxy

    var body: some View {
        Circle()
            .fill(Color.blue)
            .frame(width: 20, height: 20)
            .position(position)
            .gesture(
                DragGesture()
                    .onChanged { value in
                        var newLocation = value.location
                        newLocation.x = max(0, min(newLocation.x, geometry.size.width))
                        newLocation.y = max(0, min(newLocation.y, geometry.size.height))
                        position = newLocation
                    }
            )
    }
}

struct SimpleDemo: View {
    var body: some View {
        if let image = UIImage(named: "imgMusic") {
            AdjustableImage(uiImage: image)
                .frame(width: 300, height: 300)
                .border(Color.gray, width: 1)
        } else {
            Text("Image not found")
        }
    }
}

#Preview {
    SimpleDemo()
}

My Questions:

- How can I stretch the image from the dragged points instead of just warping it?
- Should I use Metal shaders for real-time deformation? If so, how do I map user interaction to vertex manipulation?
- Is there a Core Image or SwiftUI-based approach that supports non-linear stretching?
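For context on the vertex-mapping question, here is a minimal sketch (my own illustration, not from any Apple API) of the math a Metal vertex shader would typically apply for this kind of stretch: each texture coordinate (u, v) in the unit square is bilinearly interpolated between the four dragged corners, so moving one corner pulls the whole quad toward it instead of applying a projective warp.

```swift
// Sketch of bilinear corner interpolation — the per-vertex mapping a
// Metal shader could use. `Point` is a hypothetical stand-in for CGPoint
// so the math is self-contained.
struct Point { var x: Double; var y: Double }

func bilinear(u: Double, v: Double,
              topLeft: Point, topRight: Point,
              bottomLeft: Point, bottomRight: Point) -> Point {
    // Interpolate along the top and bottom edges, then between the two results.
    let top = Point(x: topLeft.x + (topRight.x - topLeft.x) * u,
                    y: topLeft.y + (topRight.y - topLeft.y) * u)
    let bottom = Point(x: bottomLeft.x + (bottomRight.x - bottomLeft.x) * u,
                       y: bottomLeft.y + (bottomRight.y - bottomLeft.y) * u)
    return Point(x: top.x + (bottom.x - top.x) * v,
                 y: top.y + (bottom.y - top.y) * v)
}

// Example: drag the bottom-right corner of a 100×100 quad out to (150, 120).
let p = bilinear(u: 0.5, v: 0.5,
                 topLeft: Point(x: 0, y: 0),
                 topRight: Point(x: 100, y: 0),
                 bottomLeft: Point(x: 0, y: 100),
                 bottomRight: Point(x: 150, y: 120))
print(p.x, p.y) // prints "62.5 55.0" — the centre moves toward the dragged corner
```

Evaluating this per vertex over a grid mesh (rather than only at the four corners) is what distinguishes a stretch from CIPerspectiveTransform's homography.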

This is what I get now and what it looks like:

This is what I want: to be able to edit the image by moving points, like this:

Tags: swift | Title: How to Stretch an Image from Draggable Points Using Metal or Core Image (Stack Overflow)