// wgpu_core/hub.rs

/*! Allocating resource ids, and tracking the resources they refer to.

The `wgpu_core` API uses identifiers of type [`Id<R>`] to refer to
resources of type `R`. For example, [`id::DeviceId`] is an alias for
`Id<markers::Device>`, and [`id::BufferId`] is an alias for
`Id<markers::Buffer>`. `Id` implements `Copy`, `Hash`, `Eq`, `Ord`, and
of course `Debug`.

[`id::DeviceId`]: crate::id::DeviceId
[`id::BufferId`]: crate::id::BufferId

Each `Id` contains not only an index for the resource it denotes but
also a `Backend` indicating which `wgpu` backend it belongs to. You
can use the [`gfx_select`] macro to dynamically dispatch on an id's
backend to a function specialized at compile time for a specific
backend. See that macro's documentation for details.
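
For example, a caller holding an id of unknown backend might dispatch
like this (an illustrative sketch, not a verbatim call site; the method
and arguments are whatever backend-specialized operation you need):

```ignore
// Dispatch on `device_id`'s backend and invoke the matching
// monomorphization of `Global::device_create_buffer`.
let (buffer_id, error) = gfx_select!(device_id => global.device_create_buffer(
    device_id,
    &descriptor,
    None, // let `wgpu_core` choose the buffer id; see below
));
```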

`Id`s also incorporate a generation number, for additional validation.

The resources to which identifiers refer are freed explicitly.
Attempting to use an identifier for a resource that has been freed
elicits an error result.

## Assigning ids to resources

The users of `wgpu_core` generally want resource ids to be assigned
in one of two ways:

- Users like `wgpu` want `wgpu_core` to assign ids to resources itself.
  For example, `wgpu` expects to call `Global::device_create_buffer`
  and have the return value indicate the newly created buffer's id.

- Users like `player` and Firefox want to allocate ids themselves, and
  pass `Global::device_create_buffer` and friends the id to assign to
  the new resource.

To accommodate either pattern, `wgpu_core` methods that create
resources all expect an `id_in` argument that the caller can use to
specify the id, and they all return the id used. For example, the
declaration of `Global::device_create_buffer` looks like this:

```ignore
impl Global {
    /* ... */
    pub fn device_create_buffer<A: HalApi>(
        &self,
        device_id: id::DeviceId,
        desc: &resource::BufferDescriptor,
        id_in: Option<id::BufferId>,
    ) -> (id::BufferId, Option<resource::CreateBufferError>) {
        /* ... */
    }
    /* ... */
}
```

Users that want to assign resource ids themselves pass in the id they
want as the `id_in` argument, whereas users that want `wgpu_core`
itself to choose ids always pass `None`. In either case, the id
ultimately assigned is returned as the first element of the tuple.
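
For example, the two calling styles look roughly like this (a sketch
only; `desc`, `device_id`, and `preallocated_id` are placeholders, and
error handling is elided):

```ignore
// `wgpu`-style: let `wgpu_core` pick the id.
let (buffer_id, error) =
    global.device_create_buffer::<A>(device_id, &desc, None);

// `player`/Firefox-style: the caller supplies an id it allocated itself.
let (same_id, error) =
    global.device_create_buffer::<A>(device_id, &desc, Some(preallocated_id));
assert_eq!(same_id, preallocated_id);
```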

Producing true identifiers from `id_in` values is the job of
[`crate::identity::IdentityManager`]; alternatively, ids may be provided
from outside through the `Option<Id>` arguments.

## Id allocation and streaming

Perhaps surprisingly, allowing users to assign resource ids themselves
enables major performance improvements in some applications.

The `wgpu_core` API is designed for use by Firefox's [WebGPU]
implementation. For security, web content and GPU use must be kept
segregated in separate processes, with all interaction between them
mediated by an inter-process communication protocol. As web content uses
the WebGPU API, the content process sends messages to the GPU process,
which interacts with the platform's GPU APIs on content's behalf,
occasionally sending results back.

In a classic Rust API, a resource allocation function takes parameters
describing the resource to create, and if creation succeeds, it returns
the resource id in a `Result::Ok` value. However, this design is a poor
fit for the split-process design described above: content must wait for
the reply to its buffer-creation message (say) before it can know which
id it can use in the next message that uses that buffer. In a common
usage pattern, the classic Rust design imposes the latency of a full
cross-process round trip.

We can avoid incurring these round-trip latencies simply by letting the
content process assign resource ids itself. With this approach, content
can choose an id for the new buffer, send a message to create the
buffer, and then immediately send the next message operating on that
buffer, since it already knows its id. Allowing content and GPU process
activity to be pipelined greatly improves throughput.
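
Schematically, the content-process side of such a protocol might look
like the sketch below. The message types and the `allocate_buffer_id`
helper are purely hypothetical (the real IPC layer lives outside
`wgpu_core`); the point is only that the second message can be sent
without waiting for a reply to the first:

```ignore
// Hypothetical messages sent from the content process to the GPU process;
// these are not `wgpu_core` types.
enum Message {
    CreateBuffer { id: id::BufferId, desc: BufferDescriptor },
    WriteBuffer { id: id::BufferId, data: Vec<u8> },
}

// The content process picks the id itself (for example, from an
// `IdentityManager`), so it can stream the follow-up command immediately,
// without waiting for a reply to the creation message.
let buffer_id = allocate_buffer_id(); // hypothetical helper
send(Message::CreateBuffer { id: buffer_id, desc });
send(Message::WriteBuffer { id: buffer_id, data });
```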

To help propagate errors correctly in this style of usage, when resource
creation fails, the id supplied for that resource is marked to indicate
as much, allowing subsequent operations using that id to be properly
flagged as errors as well.

[`gfx_select`]: crate::gfx_select
[`Id<R>`]: crate::id::Id
[WebGPU]: https://www.w3.org/TR/webgpu/

*/

use crate::{
    binding_model::{BindGroup, BindGroupLayout, PipelineLayout},
    command::{CommandBuffer, RenderBundle},
    device::{queue::Queue, Device},
    hal_api::HalApi,
    instance::{Adapter, Surface},
    pipeline::{ComputePipeline, RenderPipeline, ShaderModule},
    registry::{Registry, RegistryReport},
    resource::{Buffer, QuerySet, Sampler, StagingBuffer, Texture, TextureView},
    storage::{Element, Storage},
};
use std::fmt::Debug;

#[derive(Debug, PartialEq, Eq)]
pub struct HubReport {
    pub adapters: RegistryReport,
    pub devices: RegistryReport,
    pub queues: RegistryReport,
    pub pipeline_layouts: RegistryReport,
    pub shader_modules: RegistryReport,
    pub bind_group_layouts: RegistryReport,
    pub bind_groups: RegistryReport,
    pub command_buffers: RegistryReport,
    pub render_bundles: RegistryReport,
    pub render_pipelines: RegistryReport,
    pub compute_pipelines: RegistryReport,
    pub query_sets: RegistryReport,
    pub buffers: RegistryReport,
    pub textures: RegistryReport,
    pub texture_views: RegistryReport,
    pub samplers: RegistryReport,
}

impl HubReport {
    /// Only the adapter registry is checked here: every other kind of
    /// resource is created, directly or indirectly, from an adapter.
    pub fn is_empty(&self) -> bool {
        self.adapters.is_empty()
    }
}

#[allow(rustdoc::private_intra_doc_links)]
/// All the resources for a particular backend in a [`crate::global::Global`].
///
/// To obtain `global`'s `Hub` for some [`HalApi`] backend type `A`,
/// call [`A::hub(global)`].
///
/// ## Locking
///
/// Each field in `Hub` is a [`Registry`] holding all the values of a
/// particular type of resource, all protected by a single `RwLock`.
/// So for example, to access any [`Buffer`], you must acquire a read
/// lock on the `Hub`'s entire buffers registry. The lock guard
/// gives you access to the `Registry`'s [`Storage`], which you can
/// then index with the buffer's id. (Yes, this design causes
/// contention; see [#2272].)
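///
/// In code, that access looks roughly like this (a sketch; assuming a
/// `read()` guard whose [`Storage`] can be indexed by id, as described
/// above):
///
/// ```ignore
/// let hub = A::hub(global);
/// let buffers = hub.buffers.read(); // read-lock the whole registry
/// let buffer = &buffers[buffer_id]; // index the Storage by id
/// ```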
///
/// But most `wgpu` operations require access to several different
/// kinds of resource, so you often need to hold locks on several
/// different fields of your [`Hub`] simultaneously.
///
/// Inside the `Registry`, each resource is stored as an `Arc<T>`, where
/// `T` is the resource type. The `Registry`'s lock is held only while
/// looking up a specific resource; once the `Arc` has been cloned, the
/// resource can be used without holding the lock.
///
/// [`A::hub(global)`]: HalApi::hub
/// [#2272]: https://github.com/gfx-rs/wgpu/issues/2272
pub struct Hub<A: HalApi> {
    pub(crate) adapters: Registry<Adapter<A>>,
    pub(crate) devices: Registry<Device<A>>,
    pub(crate) queues: Registry<Queue<A>>,
    pub(crate) pipeline_layouts: Registry<PipelineLayout<A>>,
    pub(crate) shader_modules: Registry<ShaderModule<A>>,
    pub(crate) bind_group_layouts: Registry<BindGroupLayout<A>>,
    pub(crate) bind_groups: Registry<BindGroup<A>>,
    pub(crate) command_buffers: Registry<CommandBuffer<A>>,
    pub(crate) render_bundles: Registry<RenderBundle<A>>,
    pub(crate) render_pipelines: Registry<RenderPipeline<A>>,
    pub(crate) compute_pipelines: Registry<ComputePipeline<A>>,
    pub(crate) query_sets: Registry<QuerySet<A>>,
    pub(crate) buffers: Registry<Buffer<A>>,
    pub(crate) staging_buffers: Registry<StagingBuffer<A>>,
    pub(crate) textures: Registry<Texture<A>>,
    pub(crate) texture_views: Registry<TextureView<A>>,
    pub(crate) samplers: Registry<Sampler<A>>,
}

impl<A: HalApi> Hub<A> {
    fn new() -> Self {
        Self {
            adapters: Registry::new(A::VARIANT),
            devices: Registry::new(A::VARIANT),
            queues: Registry::new(A::VARIANT),
            pipeline_layouts: Registry::new(A::VARIANT),
            shader_modules: Registry::new(A::VARIANT),
            bind_group_layouts: Registry::new(A::VARIANT),
            bind_groups: Registry::new(A::VARIANT),
            command_buffers: Registry::new(A::VARIANT),
            render_bundles: Registry::new(A::VARIANT),
            render_pipelines: Registry::new(A::VARIANT),
            compute_pipelines: Registry::new(A::VARIANT),
            query_sets: Registry::new(A::VARIANT),
            buffers: Registry::new(A::VARIANT),
            staging_buffers: Registry::new(A::VARIANT),
            textures: Registry::new(A::VARIANT),
            texture_views: Registry::new(A::VARIANT),
            samplers: Registry::new(A::VARIANT),
        }
    }

    //TODO: instead of having a hacky `with_adapters` parameter,
    // we should have `clear_device(device_id)` that specifically destroys
    // everything related to a logical device.
    pub(crate) fn clear(&self, surface_guard: &Storage<Surface>, with_adapters: bool) {
        use hal::Surface;

        let mut devices = self.devices.write();
        for element in devices.map.iter() {
            if let Element::Occupied(ref device, _) = *element {
                device.prepare_to_die();
            }
        }

        self.command_buffers.write().map.clear();
        self.samplers.write().map.clear();
        self.texture_views.write().map.clear();
        self.textures.write().map.clear();
        self.buffers.write().map.clear();
        self.bind_groups.write().map.clear();
        self.shader_modules.write().map.clear();
        self.bind_group_layouts.write().map.clear();
        self.pipeline_layouts.write().map.clear();
        self.compute_pipelines.write().map.clear();
        self.render_pipelines.write().map.clear();
        self.query_sets.write().map.clear();

        for element in surface_guard.map.iter() {
            if let Element::Occupied(ref surface, _epoch) = *element {
                if let Some(ref mut present) = surface.presentation.lock().take() {
                    if let Some(device) = present.device.downcast_ref::<A>() {
                        let suf = A::surface_as_hal(surface);
                        unsafe {
                            suf.unwrap().unconfigure(device.raw());
                            //TODO: we could destroy the surface here
                        }
                    }
                }
            }
        }

        self.queues.write().map.clear();
        devices.map.clear();

        if with_adapters {
            drop(devices);
            self.adapters.write().map.clear();
        }
    }

    pub(crate) fn surface_unconfigure(&self, device: &Device<A>, surface: &A::Surface) {
        unsafe {
            use hal::Surface;
            surface.unconfigure(device.raw());
        }
    }

    pub fn generate_report(&self) -> HubReport {
        HubReport {
            adapters: self.adapters.generate_report(),
            devices: self.devices.generate_report(),
            queues: self.queues.generate_report(),
            pipeline_layouts: self.pipeline_layouts.generate_report(),
            shader_modules: self.shader_modules.generate_report(),
            bind_group_layouts: self.bind_group_layouts.generate_report(),
            bind_groups: self.bind_groups.generate_report(),
            command_buffers: self.command_buffers.generate_report(),
            render_bundles: self.render_bundles.generate_report(),
            render_pipelines: self.render_pipelines.generate_report(),
            compute_pipelines: self.compute_pipelines.generate_report(),
            query_sets: self.query_sets.generate_report(),
            buffers: self.buffers.generate_report(),
            textures: self.textures.generate_report(),
            texture_views: self.texture_views.generate_report(),
            samplers: self.samplers.generate_report(),
        }
    }
}

pub struct Hubs {
    #[cfg(vulkan)]
    pub(crate) vulkan: Hub<hal::api::Vulkan>,
    #[cfg(metal)]
    pub(crate) metal: Hub<hal::api::Metal>,
    #[cfg(dx12)]
    pub(crate) dx12: Hub<hal::api::Dx12>,
    #[cfg(gles)]
    pub(crate) gl: Hub<hal::api::Gles>,
    #[cfg(all(not(vulkan), not(metal), not(dx12), not(gles)))]
    pub(crate) empty: Hub<hal::api::Empty>,
}

impl Hubs {
    pub(crate) fn new() -> Self {
        Self {
            #[cfg(vulkan)]
            vulkan: Hub::new(),
            #[cfg(metal)]
            metal: Hub::new(),
            #[cfg(dx12)]
            dx12: Hub::new(),
            #[cfg(gles)]
            gl: Hub::new(),
            #[cfg(all(not(vulkan), not(metal), not(dx12), not(gles)))]
            empty: Hub::new(),
        }
    }
}