SwiftUI is the perfect addition to an iOS or Mac developer’s toolbox, and it will only improve as the SDK matures. SwiftUI’s solid native support is ideal for vertical lists and items arranged in a rectangular grid.
But what about UIs that aren’t so square?
Think about apps like Sketch or OmniGraffle. These allow you to arrange items at arbitrary points on the screen and draw connections between them.
In this tutorial, you’ll learn how to create this type of mind-map spatial UI using SwiftUI. You’ll create a simple mind-mapping app that allows you to place text boxes on the screen, move them around and create connections between them.
Make sure you have Xcode 11.3 or higher installed before you continue.
Note: This tutorial assumes you have a basic understanding of SwiftUI syntax and intermediate-level Swift skills. If this is your first trip into SwiftUI, check out our basic SwiftUI tutorial first.
Getting Started
Download the tutorial materials by clicking the Download Materials button at the top or bottom of the tutorial. To keep you focused on the SwiftUI elements of this project, you’ll start with some existing model code that describes the graph. You’ll learn more about graph theory in the next section.
Open the RazeMind project in the starter folder. In the Project navigator, locate and expand the folder called Model. You'll see four Swift files that provide a data source for the graph that you'll render:
- Mesh.swift: The mesh is the top-level container for the model. A mesh has a set of nodes and a set of edges. There's some logic associated with manipulating the mesh's data. You'll use that logic later in the tutorial.
- Node.swift: A node describes one object in the mesh, the position of the node and the text contained by the node.
- Edge.swift: An edge describes a connection between two nodes and includes references to them.
- SelectionHandler.swift: This helper acts as a persistent memory of the selection state of the view. There is some logic associated with selection and editing of nodes that you'll use later.
Feel free to browse the starter project code. In a nutshell, the starter code provides managed access to some sets of objects. You don’t need to understand it all right now.
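Before moving on, it can help to have a rough mental model of those types. The sketch below is only an approximation inferred from how this tutorial uses them; the real definitions in the starter project are richer, and you'll also meet an EdgeProxy type later that carries an edge's two endpoints as ready-to-draw screen positions.

import CoreGraphics
import Foundation

// A rough sketch of the model's value types, inferred from how they're used
// later in this tutorial. The starter project's real definitions may differ.
struct Node: Identifiable {
  let id = UUID()
  var position: CGPoint = .zero
  var text: String
  // ForEach later identifies nodes by a visualID string.
  var visualID: String { id.uuidString }
}

struct Edge: Identifiable {
  let id = UUID()
  var start: UUID   // the id of the node the edge starts at
  var end: UUID     // the id of the node the edge ends at
}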
Understanding Graph Theory
Graphs are mathematical structures that model pair-wise relationships between nodes in the graph. A connection between two nodes is an edge.
Graphs are either directed or undirected. A directed graph attaches an orientation to the edge between its two end nodes A and B, e.g. A -> B != B -> A. An undirected graph doesn't give any significance to the orientation of the end points, so A -> B == B -> A.
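If it helps to see that distinction in code, here's a tiny standalone illustration. DirectedEdge is purely a hypothetical type for this example; it isn't part of the starter project.

// A directed edge: the (from, to) order matters.
struct DirectedEdge: Hashable {
  let from: String
  let to: String
}

let ab = DirectedEdge(from: "A", to: "B")
let ba = DirectedEdge(from: "B", to: "A")
print(ab == ba)                                        // false: A -> B is not B -> A

// Treating the endpoints as an unordered set models an undirected edge.
print(Set([ab.from, ab.to]) == Set([ba.from, ba.to]))  // true: A-B is the same edge as B-A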
A graph is a web of connections, and a node can reference anything you choose. In the sample project, your node is a container for a single string, but you can think as big as you want. Imagine you're an architect planning a building: you could take components from a palette, arrange them as nodes and generate a bill of materials from that information.
Designing Your UI
For this tutorial, you’ll build an infinite 2D surface. You’ll be able to pan the surface and zoom in and out to see more or less content.
When you create your own app, you need to decide how you want your interface to operate. You can do almost anything, but remember to consider common-use patterns and accessibility. If your interface is too strange or complex, your users will find it hard to work with.
Here’s the set of rules you’ll implement:
- Change the position of a node by dragging it.
- Select a node by tapping it.
- Pan and zoom the screen, which acts like an infinite surface.
- Pan the surface by dragging the surface.
- Use a pinch gesture to zoom in and out.
Now, it’s time to implement these features. You’ll start by building some simple views.
Building the View Primitives
You want to display two things on the surface: nodes and edges. The first thing to do is to create SwiftUI views for these two types.
Creating a Node View
Start by creating a new file. In the Project navigator, select the View Stack folder and then add a new file by pressing Command-N. Select iOS ▸ SwiftUI View and click Next. Name the file NodeView.swift and check that you've selected the RazeMind target. Finally, click Create.
Inside NodeView, add these variables:

static let width = CGFloat(100)
// 1
@State var node: Node
// 2
@ObservedObject var selection: SelectionHandler
// 3
var isSelected: Bool {
  return selection.isNodeSelected(node)
}
1. You pass the node you want to display.
2. @ObservedObject tells you that selection is passed to NodeView by reference, as it has a requirement of AnyObject.
3. The computed property isSelected keeps things tidy inside the body of the view.
Now, find the NodeView_Previews implementation and replace the body of the previews property with:
let selection1 = SelectionHandler()
let node1 = Node(text: "hello world")
let selection2 = SelectionHandler()
let node2 = Node(text: "I'm selected, look at me")
selection2.selectNode(node2)

return VStack {
  NodeView(node: node1, selection: selection1)
  NodeView(node: node2, selection: selection2)
}
Here, you instantiate two nodes using two different instances of SelectionHandler. This provides you with a preview of how the view looks when you select it.
Go back to NodeView and replace the body property with the following implementation:
Ellipse()
  .fill(Color.green)
  .overlay(Ellipse()
    .stroke(isSelected ? Color.red : Color.black, lineWidth: isSelected ? 5 : 3))
  .overlay(Text(node.text)
    .multilineTextAlignment(.center)
    .padding(EdgeInsets(top: 0, leading: 8, bottom: 0, trailing: 8)))
  .frame(width: NodeView.width, height: NodeView.width, alignment: .center)
This results in a green ellipse with a border and text inside. Nothing fancy, but it’s fine to start working with.
Select the iPhone 11 Pro simulator in the target selector. This choice controls how the SwiftUI canvas displays your preview.
Open the SwiftUI canvas using Adjust Editor Options ▸ Canvas at the top-right of the editor or by pressing Option-Command-Return .
In the preview frame, you'll see the two possible versions of NodeView.
Creating an Edge Shape
Now, you’re ready for your next task, which is to create the view of an edge. An edge is a line that connects two nodes.
In the Project navigator, select View Stack. Then, create a new SwiftUI View file. Name the file EdgeView.swift.
Xcode has created a template view called EdgeView, but you want EdgeView to be a Shape. So, replace the declaration for the type:
struct EdgeView: View {
With:
struct EdgeView: Shape {
Delete the template’s body
. Now, you have a struct
with no code inside.
To define the shape, add this code inside EdgeView:
var startx: CGFloat = 0
var starty: CGFloat = 0
var endx: CGFloat = 0
var endy: CGFloat = 0

// 1
init(edge: EdgeProxy) {
  // 2
  startx = edge.start.x
  starty = edge.start.y
  endx = edge.end.x
  endy = edge.end.y
}

// 3
func path(in rect: CGRect) -> Path {
  var linkPath = Path()
  linkPath.move(to: CGPoint(x: startx, y: starty)
    .alignCenterInParent(rect.size))
  linkPath.addLine(to: CGPoint(x: endx, y: endy)
    .alignCenterInParent(rect.size))
  return linkPath
}
Looking at the code for EdgeView:
1. You initialize the shape with an instance of EdgeProxy, not Edge, because an Edge doesn't know anything about the Node instances it references. The Mesh rebuilds the list of EdgeProxy objects when the model changes.
2. You split the two end CGPoints into four CGFloat properties. This becomes important later in the tutorial, when you add animation.
3. The drawing in path(in:) is a simple straight line from start to end. The call to the helper alignCenterInParent(_:) shifts the origin of the line from the top leading edge to the center of the view rectangle; there's a sketch of that helper just after this list.
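The starter project supplies alignCenterInParent(_:), so you don't need to write it. Based on the behavior described above, it's roughly equivalent to this sketch, which may differ from the actual RazeMind source:

import CoreGraphics

extension CGPoint {
  // Re-bases a point expressed relative to the center of a container onto
  // the container's own coordinate space, whose origin is the top-leading corner.
  func alignCenterInParent(_ size: CGSize) -> CGPoint {
    return CGPoint(x: x + size.width / 2, y: y + size.height / 2)
  }
}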
Locate EdgeView_Previews below EdgeView, and replace the default implementation of previews with this code:
let edge1 = EdgeProxy(
  id: UUID(),
  start: CGPoint(x: -100, y: -100),
  end: CGPoint(x: 100, y: 100))
let edge2 = EdgeProxy(
  id: UUID(),
  start: CGPoint(x: 100, y: -100),
  end: CGPoint(x: -100, y: 100))

return ZStack {
  EdgeView(edge: edge1).stroke(lineWidth: 4)
  EdgeView(edge: edge2).stroke(Color.blue, lineWidth: 2)
}
Refresh the preview. You’ll see an X centered in the simulator window.
You’re now ready to start creating your mesh view.
Making a Map View
In this section, you’ll combine your NodeView
and EdgeView
to show a visual description of your Mesh
.
Creating the Nodes’ Layer
For your first task, you'll build the layer that draws the nodes. You could smoosh the nodes and the edges into one view, but keeping them in separate layers is tidier and gives better modularity and data isolation.
In the Project navigator, select the View Stack folder. Then, create a new SwiftUI View file. Name the file NodeMapView.swift.
Locate NodeMapView, and add these two properties to it:
@ObservedObject var selection: SelectionHandler
@Binding var nodes: [Node]
Using @Binding on nodes tells SwiftUI that another object will own the node collection and pass it to NodeMapView.
Next, replace the template implementation of body with this:
ZStack {
  ForEach(nodes, id: \.visualID) { node in
    NodeView(node: node, selection: self.selection)
      .offset(x: node.position.x, y: node.position.y)
      .onTapGesture {
        self.selection.selectNode(node)
      }
  }
}
Examine the body of NodeMapView. You're creating a ZStack of nodes and applying an offset to each node to position the node on the surface. Each node also gains an action to perform when you tap it.
Finally, locate NodeMapView_Previews and add these properties to it:
static let node1 = Node(position: CGPoint(x: -100, y: -30), text: "hello")
static let node2 = Node(position: CGPoint(x: 100, y: 30), text: "world")
@State static var nodes = [node1, node2]
And replace the implementation of previews with this:
let selection = SelectionHandler()
return NodeMapView(selection: selection, nodes: $nodes)
Notice how you use the $nodes syntax to pass a Binding. Placing the mock node array outside previews as a @State allows you to create this binding.
Refresh the canvas and you’ll see two nodes side by side. Place the canvas into Live Preview mode by pressing the Play button. The selection logic is now interactive, and touching either node will display a red border.
Creating the Edges’ Layer
Now, you’ll create a layer to display all the edges.
In the Project navigator, select the View Stack folder. Then, create a new SwiftUI View file. Name the file EdgeMapView.swift.
Add this property to EdgeMapView:
@Binding var edges: [EdgeProxy]
Replace the body implementation with this:
ZStack {
  ForEach(edges) { edge in
    EdgeView(edge: edge)
      .stroke(Color.black, lineWidth: 3.0)
  }
}
Notice that each edge in the array has a black stroke.
Add these properties to EdgeMapView_Previews:
static let proxy1 = EdgeProxy(
  id: EdgeID(),
  start: .zero,
  end: CGPoint(x: -100, y: 30))
static let proxy2 = EdgeProxy(
  id: EdgeID(),
  start: .zero,
  end: CGPoint(x: 100, y: 30))
@State static var edges = [proxy1, proxy2]
Replace the implementation of previews with this line:
EdgeMapView(edges: $edges)
Again, you create a @State property to pass the mock data to the preview of EdgeMapView. Your preview will display the two edges:
OK, get excited because you’re almost there! Now, you’ll combine the two layers to form the finished view.
Creating the MapView
You’re going to place one layer on top of the other to create the finished view.
In the Project navigator, select View Stack and create a new SwiftUI View file. Name the file MapView.swift.
Add these two properties to MapView:
@ObservedObject var selection: SelectionHandler
@ObservedObject var mesh: Mesh
Again, you have a reference to a SelectionHandler here. For the first time, you bring an instance of Mesh into the view system.
Replace the body implementation with this:
ZStack {
  Rectangle().fill(Color.orange)
  EdgeMapView(edges: $mesh.links)
  NodeMapView(selection: selection, nodes: $mesh.nodes)
}
Finally, you're putting all the different views together. You start with an orange rectangle, stack the edges on top of it and, finally, stack the nodes. The orange rectangle helps you see what's happening to your view.
Notice how you bind only the relevant parts of mesh to EdgeMapView and NodeMapView using the $ notation.
Locate MapView_Previews, and replace the code in previews with this implementation:
let mesh = Mesh()
let child1 = Node(position: CGPoint(x: 100, y: 200), text: "child 1")
let child2 = Node(position: CGPoint(x: -100, y: 200), text: "child 2")
[child1, child2].forEach {
  mesh.addNode($0)
  mesh.connect(mesh.rootNode(), to: $0)
}
mesh.connect(child1, to: child2)
let selection = SelectionHandler()
return MapView(selection: selection, mesh: mesh)
You create two nodes and add them to a Mesh. Then, you create edges between the nodes. Click Resume in the preview pane, and your canvas should now display three nodes with links between them.

In RazeMind's specific case, a Mesh always has a root node.
That’s it. Your core map view is complete. Now, you’ll start to add some drag interactions.
Dragging Nodes
In this section, you’ll add the drag gestures so you can move your NodeView
around the screen. You’ll also add the ability to pan the MapView
.
In the Project navigator, select View Stack. Then create a new SwiftUI View file and name it SurfaceView.swift.
Inside SurfaceView, add these properties:
@ObservedObject var mesh: Mesh
@ObservedObject var selection: SelectionHandler

//dragging
@State var portalPosition: CGPoint = .zero
@State var dragOffset: CGSize = .zero
@State var isDragging: Bool = false
@State var isDraggingMesh: Bool = false

//zooming
@State var zoomScale: CGFloat = 1.0
@State var initialZoomScale: CGFloat?
@State var initialPortalPosition: CGPoint?
Locate SurfaceView_Previews and replace the implementation of previews with this:
let mesh = Mesh.sampleMesh()
let selection = SelectionHandler()
return SurfaceView(mesh: mesh, selection: selection)
The @State variables that you added to SurfaceView keep track of the drag and magnification gestures you're about to create.

Note that this section only deals with dragging. In the next section, you'll tackle zooming the MapView. But before you set up the dragging actions using a DragGesture, you need to add a little infrastructure.
Getting Ready to Drag
In SurfaceView_Previews, you instantiate a pre-made mesh and assign that mesh to SurfaceView.
Replace the body implementation inside SurfaceView with this code:
VStack {
  // 1
  Text("drag offset = w:\(dragOffset.width), h:\(dragOffset.height)")
  Text("portal offset = x:\(portalPosition.x), y:\(portalPosition.y)")
  Text("zoom = \(zoomScale)")
  //<-- insert TextField here
  // 2
  GeometryReader { geometry in
    // 3
    ZStack {
      Rectangle().fill(Color.yellow)
      MapView(selection: self.selection, mesh: self.mesh)
        //<-- insert scale here later
        // 4
        .offset(
          x: self.portalPosition.x + self.dragOffset.width,
          y: self.portalPosition.y + self.dragOffset.height)
        .animation(.easeIn)
    }
    //<-- add drag gesture later
  }
}
Here, you've created a VStack of four views.
1. You have three Text elements that display some information about the state.
2. GeometryReader provides information about the size of the containing VStack.
3. Inside the GeometryReader, you have a ZStack that contains a yellow background and a MapView.
4. MapView is offset from the center of SurfaceView by a combination of dragOffset and portalPosition. The MapView also has a basic animation that makes changes look pretty and silky-smooth.
Your view preview looks like this now:
Handling Changes to the Drag State
Now, you need a little help to process changes to the drag state. Add this extension to the end of SurfaceView.swift:
private extension SurfaceView {
  // 1
  func distance(from pointA: CGPoint, to pointB: CGPoint) -> CGFloat {
    let xdelta = pow(pointA.x - pointB.x, 2)
    let ydelta = pow(pointA.y - pointB.y, 2)
    return sqrt(xdelta + ydelta)
  }

  // 2
  func hitTest(point: CGPoint, parent: CGSize) -> Node? {
    for node in mesh.nodes {
      let endPoint = node.position
        .scaledFrom(zoomScale)
        .alignCenterInParent(parent)
        .translatedBy(x: portalPosition.x, y: portalPosition.y)
      let dist = distance(from: point, to: endPoint) / zoomScale
      // 3
      if dist < NodeView.width / 2.0 {
        return node
      }
    }
    return nil
  }

  // 4
  func processNodeTranslation(_ translation: CGSize) {
    guard !selection.draggingNodes.isEmpty else { return }
    let scaledTranslation = translation.scaledDownTo(zoomScale)
    mesh.processNodeTranslation(
      scaledTranslation,
      nodes: selection.draggingNodes)
  }
}
This extension provides some low-level helper methods for asking questions about the drag action.
1. The helper distance(from:to:) is an implementation of the Pythagorean theorem. It calculates the distance between two points.
2. In hitTest(point:parent:), you convert a point in the reference system of SurfaceView to the reference system of MapView. The conversion uses the current zoomScale, the size of SurfaceView and the current offset of MapView, via small geometry helpers from the starter project; there's a sketch of those helpers just after this list.
3. If the distance between the position of a Node and the input point is less than the radius of NodeView, then the touched point is inside the NodeView.
4. processNodeTranslation(_:) uses the current zoomScale to scale the translation. It then asks the Mesh to move nodes using information from SelectionHandler.
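These methods lean on a few CGPoint and CGSize helpers that ship with the starter project: scaledFrom(_:), translatedBy(x:y:) and scaledDownTo(_:). The tutorial doesn't list them, but based on how they're used they behave roughly like this sketch, which may differ from the actual RazeMind implementations:

import CoreGraphics

extension CGPoint {
  // Scales a model-space point up into zoomed screen space.
  func scaledFrom(_ factor: CGFloat) -> CGPoint {
    return CGPoint(x: x * factor, y: y * factor)
  }

  // Offsets a point by the current pan of the surface.
  func translatedBy(x: CGFloat, y: CGFloat) -> CGPoint {
    return CGPoint(x: self.x + x, y: self.y + y)
  }
}

extension CGSize {
  // Converts a screen-space drag translation back into model space.
  func scaledDownTo(_ factor: CGFloat) -> CGSize {
    return CGSize(width: width / factor, height: height / factor)
  }
}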
Start working your way up the processing stack and add these methods inside the same extension:
func processDragChange(_ value: DragGesture.Value, containerSize: CGSize) {
  // 1
  if !isDragging {
    isDragging = true

    if let node = hitTest(
      point: value.startLocation,
      parent: containerSize
    ) {
      isDraggingMesh = false
      selection.selectNode(node)
      // 2
      selection.startDragging(mesh)
    } else {
      isDraggingMesh = true
    }
  }

  // 3
  if isDraggingMesh {
    dragOffset = value.translation
  } else {
    processNodeTranslation(value.translation)
  }
}

// 4
func processDragEnd(_ value: DragGesture.Value) {
  isDragging = false
  dragOffset = .zero

  if isDraggingMesh {
    portalPosition = CGPoint(
      x: portalPosition.x + value.translation.width,
      y: portalPosition.y + value.translation.height)
  } else {
    processNodeTranslation(value.translation)
    selection.stopDragging(mesh)
  }
}
These methods do the work of turning the drag actions into changes in the Mesh:

1. In processDragChange(_:containerSize:), you figure out if this is the first change notification received. There are two possible drag actions in this view: You can drag a NodeView or you can drag the entire MapView, changing which part of the MapView will be shown. You use hitTest(point:parent:) to determine which action is appropriate.
2. If you're dragging a node, you ask the SelectionHandler to start the drag action for the selected nodes. SelectionHandler stores a reference to the node and the initial position of the node.
3. You apply the drag translation value to dragOffset if panning the MapView, or pass the translation to processNodeTranslation(_:).
4. processDragEnd(_:) takes the final translation value and applies that value to the dragged nodes or panned map. It then resets the tracking properties for next time.
Adding a Drag Gesture
You can now add a DragGesture to SurfaceView. In body, look for the comment line add drag gesture later. Delete the line, and add this modifier:
.gesture(DragGesture()
  .onChanged { value in
    self.processDragChange(value, containerSize: geometry.size)
  }
  .onEnded { value in
    self.processDragEnd(value)
  })
//<-- add magnification gesture later
Here, you add a DragGesture to the ZStack that contains MapView. The gesture hands off the state changes of onChanged and onEnded to the methods you added previously.
That's a big set of code to get your head around, so now's a good time to play with what you've created. Refresh the canvas and enter preview mode by pressing the Play button.
Testing Your Code
Drag any NodeView by starting your drag action on top of the NodeView; drag the MapView by starting your drag anywhere else. See how the origin of the orange MapView changes. The text at the top of the view will give you a numerical sense of what's happening.
You'll notice the links between nodes aren't animating. NodeView instances animate, but EdgeView instances don't. You'll fix this soon.
Scaling the MapView
You did most of the groundwork for magnification in the previous section. The drag helper functions already use the value of zoomScale. So, all that's left is to add a MagnificationGesture to manipulate zoomScale and to apply that scaling to MapView.
First, add the following methods to the private SurfaceView extension from earlier:
// 1
func scaledOffset(_ scale: CGFloat, initialValue: CGPoint) -> CGPoint {
  let newx = initialValue.x * scale
  let newy = initialValue.y * scale
  return CGPoint(x: newx, y: newy)
}

func clampedScale(_ scale: CGFloat, initialValue: CGFloat?) -> (scale: CGFloat, didClamp: Bool) {
  let minScale: CGFloat = 0.1
  let maxScale: CGFloat = 2.0
  let raw = scale.magnitude * (initialValue ?? maxScale)
  let value = max(minScale, min(maxScale, raw))
  let didClamp = raw != value
  return (value, didClamp)
}

func processScaleChange(_ value: CGFloat) {
  let clamped = clampedScale(value, initialValue: initialZoomScale)
  zoomScale = clamped.scale
  if !clamped.didClamp,
    let point = initialPortalPosition {
    portalPosition = scaledOffset(value, initialValue: point)
  }
}
Taking the three helpers in turn: scaledOffset(_:initialValue:) scales a CGPoint value, clampedScale(_:initialValue:) makes sure that the calculated scale stays between 0.1 and 2.0, and processScaleChange(_:) uses those two methods to adjust zoomScale and portalPosition.

You only want to modify the portalPosition when you also modify zoomScale. So, you pass didClamp in the return tuple value of clampedScale(_:initialValue:) back to processScaleChange(_:). For example, with an initialZoomScale of 1.0, a pinch value of 3.0 produces a raw scale of 3.0, which clamps to 2.0 with didClamp set to true, so the portal position is left alone; a pinch value of 0.5 stays at 0.5 with didClamp false, so the portal position scales too.
Now, add a MagnificationGesture to the ZStack that contains MapView. Locate the marker comment line add magnification gesture later. Delete the comment and replace it with this:
.gesture(MagnificationGesture()
  .onChanged { value in
    // 1
    if self.initialZoomScale == nil {
      self.initialZoomScale = self.zoomScale
      self.initialPortalPosition = self.portalPosition
    }
    self.processScaleChange(value)
  }
  .onEnded { value in
    // 2
    self.processScaleChange(value)
    self.initialZoomScale = nil
    self.initialPortalPosition = nil
  })
Here's what's going on in this code:

1. You store the initial zoomScale and portalPosition on the first change notification. Then, you pass the change to processScaleChange(_:).
2. You apply the last change and reset the tracking variables to nil.
The last thing to do is use the zoomScale on MapView. In the body property of SurfaceView, locate the comment line insert scale here later. Delete the comment and replace it with this:
.scaleEffect(self.zoomScale)
Finally, you'll crank up the preview display to 11.
In SurfaceView_Previews, locate the line:
let mesh = Mesh.sampleMesh()
And replace it with:
let mesh = Mesh.sampleProceduralMesh()
This action creates a much larger, randomly-generated mesh for the preview.
Refresh the canvas and place it in Live Preview mode.
Now, when you use a pinch gesture on the screen, the entire orange MapView will scale itself up and down around the center of the screen. You can also drag the nodes outside of the orange bounds. Artificial borders cannot contain you. :]
Animating the Links
You've already seen that EdgeView doesn't participate in the animation cycle when you drag a NodeView. To fix this, you need to give the rendering system information about how to animate the EdgeView.
Open EdgeView.swift.
EdgeView is a Shape, and Shape conforms to Animatable. The declaration for Animatable is:
/// A type that can be animated
public protocol Animatable {
  /// The type defining the data to be animated.
  associatedtype AnimatableData : VectorArithmetic

  /// The data to be animated.
  var animatableData: Self.AnimatableData { get set }
}
You need to supply a value that conforms to VectorArithmetic in the property animatableData.
Using Animatable Pairs
You have four values to animate.
So, how do you do that? You need an animatable pear. Well, actually, you need an AnimatablePair. Since you have four values, you want a pair of pairs. Think of it as a system without peer, if you will. :]
Add the following type declarations below import SwiftUI in EdgeView.swift:
typealias AnimatablePoint = AnimatablePair<CGFloat, CGFloat>
typealias AnimatableCorners = AnimatablePair<AnimatablePoint, AnimatablePoint>
This declaration bundles up the two typed pairs into one name, AnimatableCorners. Types in AnimatablePair must conform to VectorArithmetic. CGPoint doesn't conform to VectorArithmetic, which is why you break the two endpoints into their CGFloat components.
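As an aside, VectorArithmetic is what gives SwiftUI the arithmetic it needs to interpolate between two animatableData values. This little snippet isn't part of the project; it just shows the kind of math the framework can do once your data is an AnimatablePair of CGFloats:

import SwiftUI

let from = AnimatablePair<CGFloat, CGFloat>(0, 0)
let to = AnimatablePair<CGFloat, CGFloat>(100, 40)

var step = to - from    // AdditiveArithmetic: the vector between the two values
step.scale(by: 0.25)    // VectorArithmetic: a quarter of the way along
let partWay = from + step
// partWay.first == 25, partWay.second == 10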
Inside EdgeView, add this code to the end of the struct:
var animatableData: AnimatableCorners {
  get {
    return AnimatablePair(
      AnimatablePair(startx, starty),
      AnimatablePair(endx, endy))
  }
  set {
    startx = newValue.first.first
    starty = newValue.first.second
    endx = newValue.second.first
    endy = newValue.second.second
  }
}
Here, you define animatableData as an instance of AnimatableCorners and construct the nested pairs. SwiftUI now knows how to animate EdgeView.
Open SurfaceView.swift and refresh the canvas. Try dragging the nodes around now and you'll see that the links animate in sync with the nodes.
You now have a working 2D infinite surface renderer that handles zooming and panning!
Note: SwiftUI is a declarative API — you describe what you want to draw and the framework draws it on the device. Unlike UIKit, you don't need to worry about memory management of views and layers or queuing and dequeuing views.
Editing the View
So far, you've used pre-defined models, but an effective UI should allow the user to edit the model. You've shown that you can edit the position of the node, but what about the text?
In this section, you'll add a TextField to the interface.
Still in SurfaceView.swift, locate the comment insert TextField here in the body of SurfaceView and add this code to define a field for editing:
TextField("Breathe…", text: $selection.editingText, onCommit: { if let node = self.selection.onlySelectedNode(in: self.mesh) { self.mesh.updateNodeText(node, string: self.self.selection.editingText) } })
Again, SelectionHandler acts as the persistent memory for the view. You bind editingText to the TextField.
Refresh the canvas and start Live Preview mode. Edit a node by tapping it, then editing the TextField at the top of the window. Your edits will display in the view when you press Return.
Congratulations, you've dealt with all the major hurdles in creating a visual UI. Give yourself a round of applause!
Building the Application
The final act is to put your work into a running application. In the Project navigator, locate the folder Infrastructure and open the file SceneDelegate.swift.
Find the line:
let contentView = BoringListView(mesh: mesh, selection: selection)
And replace the line with your work:
let contentView = SurfaceView(mesh: mesh, selection: selection)
Build and run, and you can play with your app on a real device.
Where to Go From Here?
You can download the completed version of the project using the Download Materials button at the top or bottom of this tutorial.
You've just built the core of a draggable spatial UI. In doing so, you've covered:
- Panning with drag gestures and creating movable content.
- Using hit testing to make decisions about a drag response.
- Magnification and its effects on the coordinate system.
- Providing animatable data for types that don't natively support animation.
This interface type can be fun and useful for your users, but be sure to consider whether it's appropriate when you create one. Not all situations demand a spatial UI.
If you want to learn more about SwiftUI and SwiftUI animations, check out our SwiftUI by Tutorials book and the SwiftUI video course.
If you have any questions, be sure to leave them in the comments below.