diff --git a/Documentation/DocBook/drm.tmpl b/Documentation/DocBook/drm.tmpl
index bacefc5b222e..4b592ffbafee 100644
--- a/Documentation/DocBook/drm.tmpl
+++ b/Documentation/DocBook/drm.tmpl
@@ -291,10 +291,9 @@ char *date;
Device Registration
A number of functions are provided to help with device registration.
- The functions deal with PCI, USB and platform devices, respectively.
+ The functions deal with PCI and platform devices, respectively.
!Edrivers/gpu/drm/drm_pci.c
-!Edrivers/gpu/drm/drm_usb.c
!Edrivers/gpu/drm/drm_platform.c
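A minimal sketch of PCI device registration built on these helpers; the
foo_* names are hypothetical driver code, not part of the DRM API:
	static struct drm_driver foo_drm_driver = {
		/* .driver_features, .fops, ... */
	};
	static struct pci_driver foo_pci_driver = {
		.name = "foo",
		.id_table = foo_pci_ids,
	};
	static int __init foo_init(void)
	{
		return drm_pci_init(&foo_drm_driver, &foo_pci_driver);
	}
	static void __exit foo_exit(void)
	{
		drm_pci_exit(&foo_drm_driver, &foo_pci_driver);
	}
	module_init(foo_init);
	module_exit(foo_exit);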
New drivers that no longer rely on the services provided by the
@@ -493,10 +492,10 @@ char *date;
The Translation Table Manager (TTM)
- TTM design background and information belongs here.
+ TTM design background and information belong here.
- TTM initialization
+ TTM initialization
This section is outdated.
Drivers wishing to support TTM must fill out a drm_bo_driver
@@ -504,42 +503,42 @@ char *date;
pointers for initializing the TTM, allocating and freeing memory,
waiting for command completion and fence synchronization, and memory
migration. See the radeon_ttm.c file for an example of usage.
-
-
- The ttm_global_reference structure is made up of several fields:
-
-
- struct ttm_global_reference {
- enum ttm_global_types global_type;
- size_t size;
- void *object;
- int (*init) (struct ttm_global_reference *);
- void (*release) (struct ttm_global_reference *);
- };
-
-
- There should be one global reference structure for your memory
- manager as a whole, and there will be others for each object
- created by the memory manager at runtime. Your global TTM should
- have a type of TTM_GLOBAL_TTM_MEM. The size field for the global
- object should be sizeof(struct ttm_mem_global), and the init and
- release hooks should point at your driver-specific init and
- release routines, which probably eventually call
- ttm_mem_global_init and ttm_mem_global_release, respectively.
-
-
- Once your global TTM accounting structure is set up and initialized
- by calling ttm_global_item_ref() on it,
- you need to create a buffer object TTM to
- provide a pool for buffer object allocation by clients and the
- kernel itself. The type of this object should be TTM_GLOBAL_TTM_BO,
- and its size should be sizeof(struct ttm_bo_global). Again,
- driver-specific init and release functions may be provided,
- likely eventually calling ttm_bo_global_init() and
- ttm_bo_global_release(), respectively. Also, like the previous
- object, ttm_global_item_ref() is used to create an initial reference
- count for the TTM, which will call your initialization function.
-
+
+
+ The ttm_global_reference structure is made up of several fields:
+
+
+ struct ttm_global_reference {
+ enum ttm_global_types global_type;
+ size_t size;
+ void *object;
+ int (*init) (struct ttm_global_reference *);
+ void (*release) (struct ttm_global_reference *);
+ };
+
+
+ There should be one global reference structure for your memory
+ manager as a whole, and there will be others for each object
+ created by the memory manager at runtime. Your global TTM should
+ have a type of TTM_GLOBAL_TTM_MEM. The size field for the global
+ object should be sizeof(struct ttm_mem_global), and the init and
+ release hooks should point at your driver-specific init and
+ release routines, which probably eventually call
+ ttm_mem_global_init and ttm_mem_global_release, respectively.
+
+
+ Once your global TTM accounting structure is set up and initialized
+ by calling ttm_global_item_ref() on it,
+ you need to create a buffer object TTM to
+ provide a pool for buffer object allocation by clients and the
+ kernel itself. The type of this object should be TTM_GLOBAL_TTM_BO,
+ and its size should be sizeof(struct ttm_bo_global). Again,
+ driver-specific init and release functions may be provided,
+ likely eventually calling ttm_bo_global_init() and
+ ttm_bo_global_release(), respectively. Also, like the previous
+ object, ttm_global_item_ref() is used to create an initial reference
+ count for the TTM, which will call your initialization function.
+
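+ As a minimal sketch, following the (outdated) names used in this
+ section and modeled on radeon_ttm.c, the two global references could
+ be set up as follows (the foo_* identifiers are hypothetical):
+
+	static int foo_ttm_mem_global_init(struct ttm_global_reference *ref)
+	{
+		return ttm_mem_global_init(ref->object);
+	}
+
+	static void foo_ttm_mem_global_release(struct ttm_global_reference *ref)
+	{
+		ttm_mem_global_release(ref->object);
+	}
+
+	static int foo_ttm_global_init(struct foo_device *fdev)
+	{
+		struct ttm_global_reference *ref = &fdev->mem_global_ref;
+		int r;
+
+		/* One reference for the memory manager as a whole... */
+		ref->global_type = TTM_GLOBAL_TTM_MEM;
+		ref->size = sizeof(struct ttm_mem_global);
+		ref->init = &foo_ttm_mem_global_init;
+		ref->release = &foo_ttm_mem_global_release;
+		r = ttm_global_item_ref(ref);
+		if (r)
+			return r;
+
+		/* ...and one for the buffer object pool. Drivers may wrap
+		 * these hooks, or point at the stock ones directly. */
+		ref = &fdev->bo_global_ref;
+		ref->global_type = TTM_GLOBAL_TTM_BO;
+		ref->size = sizeof(struct ttm_bo_global);
+		ref->init = &ttm_bo_global_init;
+		ref->release = &ttm_bo_global_release;
+		return ttm_global_item_ref(ref);
+	}
+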
@@ -567,19 +566,19 @@ char *date;
using driver-specific ioctls.
- On a fundamental level, GEM involves several operations:
-
- Memory allocation and freeing
- Command execution
- Aperture management at command execution time
-
- Buffer object allocation is relatively straightforward and largely
+ On a fundamental level, GEM involves several operations:
+
+ Memory allocation and freeing
+ Command execution
+ Aperture management at command execution time
+
+ Buffer object allocation is relatively straightforward and largely
provided by Linux's shmem layer, which provides memory to back each
object.
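A driver typically embeds struct drm_gem_object and lets
drm_gem_object_init allocate the shmem backing; a sketch, where foo_bo
is a hypothetical driver structure:
	struct foo_bo {
		struct drm_gem_object base;
		/* driver-private state */
	};
	static struct foo_bo *foo_bo_create(struct drm_device *dev, size_t size)
	{
		struct foo_bo *bo = kzalloc(sizeof(*bo), GFP_KERNEL);
		if (!bo)
			return ERR_PTR(-ENOMEM);
		/* Allocates a shmem file of the given size to back the object. */
		if (drm_gem_object_init(dev, &bo->base, size)) {
			kfree(bo);
			return ERR_PTR(-ENOMEM);
		}
		return bo;
	}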
Device-specific operations, such as command execution, pinning, buffer
- read & write, mapping, and domain ownership transfers are left to
+ read & write, mapping, and domain ownership transfers are left to
driver-specific ioctls.
@@ -739,16 +738,16 @@ char *date;
respectively. The conversion is handled by the DRM core without any
driver-specific support.
-
- GEM also supports buffer sharing with dma-buf file descriptors through
- PRIME. GEM-based drivers must use the provided helpers functions to
- implement the exporting and importing correctly. See .
- Since sharing file descriptors is inherently more secure than the
- easily guessable and global GEM names it is the preferred buffer
- sharing mechanism. Sharing buffers through GEM names is only supported
- for legacy userspace. Furthermore PRIME also allows cross-device
- buffer sharing since it is based on dma-bufs.
-
+
+ GEM also supports buffer sharing with dma-buf file descriptors through
+ PRIME. GEM-based drivers must use the provided helper functions to
+ implement the exporting and importing correctly. See the PRIME Buffer
+ Sharing section below.
+ Since sharing file descriptors is inherently more secure than the
+ easily guessable and global GEM names, it is the preferred buffer
+ sharing mechanism. Sharing buffers through GEM names is only supported
+ for legacy userspace. Furthermore, PRIME also allows cross-device
+ buffer sharing since it is based on dma-bufs.
+
GEM Objects Mapping
@@ -853,7 +852,7 @@ char *date;
Command Execution
- Perhaps the most important GEM function for GPU devices is providing a
+ Perhaps the most important GEM function for GPU devices is providing a
command execution interface to clients. Client programs construct
command buffers containing references to previously allocated memory
objects, and then submit them to GEM. At that point, GEM takes care to
@@ -875,95 +874,101 @@ char *date;
GEM Function Reference
!Edrivers/gpu/drm/drm_gem.c
-
-
- VMA Offset Manager
+
+
+ VMA Offset Manager
!Pdrivers/gpu/drm/drm_vma_manager.c vma offset manager
!Edrivers/gpu/drm/drm_vma_manager.c
!Iinclude/drm/drm_vma_manager.h
-
-
- PRIME Buffer Sharing
-
- PRIME is the cross device buffer sharing framework in drm, originally
- created for the OPTIMUS range of multi-gpu platforms. To userspace
- PRIME buffers are dma-buf based file descriptors.
-
-
- Overview and Driver Interface
-
- Similar to GEM global names, PRIME file descriptors are
- also used to share buffer objects across processes. They offer
- additional security: as file descriptors must be explicitly sent over
- UNIX domain sockets to be shared between applications, they can't be
- guessed like the globally unique GEM names.
-
-
- Drivers that support the PRIME
- API must set the DRIVER_PRIME bit in the struct
- drm_driver
- driver_features field, and implement the
- prime_handle_to_fd and
- prime_fd_to_handle operations.
-
-
- int (*prime_handle_to_fd)(struct drm_device *dev,
- struct drm_file *file_priv, uint32_t handle,
- uint32_t flags, int *prime_fd);
+
+
+ PRIME Buffer Sharing
+
+ PRIME is the cross-device buffer sharing framework in DRM, originally
+ created for the OPTIMUS range of multi-GPU platforms. To userspace,
+ PRIME buffers are dma-buf based file descriptors.
+
+
+ Overview and Driver Interface
+
+ Similar to GEM global names, PRIME file descriptors are
+ also used to share buffer objects across processes. They offer
+ additional security: as file descriptors must be explicitly sent over
+ UNIX domain sockets to be shared between applications, they can't be
+ guessed like the globally unique GEM names.
+
+
+ Drivers that support the PRIME
+ API must set the DRIVER_PRIME bit in the struct
+ drm_driver
+ driver_features field, and implement the
+ prime_handle_to_fd and
+ prime_fd_to_handle operations.
+
+
+ int (*prime_handle_to_fd)(struct drm_device *dev,
+ struct drm_file *file_priv, uint32_t handle,
+ uint32_t flags, int *prime_fd);
int (*prime_fd_to_handle)(struct drm_device *dev,
- struct drm_file *file_priv, int prime_fd,
- uint32_t *handle);
- Those two operations convert a handle to a PRIME file descriptor and
- vice versa. Drivers must use the kernel dma-buf buffer sharing framework
- to manage the PRIME file descriptors. Similar to the mode setting
- API PRIME is agnostic to the underlying buffer object manager, as
- long as handles are 32bit unsigned integers.
-
-
- While non-GEM drivers must implement the operations themselves, GEM
- drivers must use the drm_gem_prime_handle_to_fd
- and drm_gem_prime_fd_to_handle helper functions.
- Those helpers rely on the driver
- gem_prime_export and
- gem_prime_import operations to create a dma-buf
- instance from a GEM object (dma-buf exporter role) and to create a GEM
- object from a dma-buf instance (dma-buf importer role).
-
-
- struct dma_buf * (*gem_prime_export)(struct drm_device *dev,
- struct drm_gem_object *obj,
- int flags);
+ struct drm_file *file_priv, int prime_fd,
+ uint32_t *handle);
+ Those two operations convert a handle to a PRIME file descriptor and
+ vice versa. Drivers must use the kernel dma-buf buffer sharing framework
+ to manage the PRIME file descriptors. Similar to the mode setting
+ API, PRIME is agnostic to the underlying buffer object manager, as
+ long as handles are 32-bit unsigned integers.
+
+
+ While non-GEM drivers must implement the operations themselves, GEM
+ drivers must use the drm_gem_prime_handle_to_fd
+ and drm_gem_prime_fd_to_handle helper functions.
+ Those helpers rely on the driver
+ gem_prime_export and
+ gem_prime_import operations to create a dma-buf
+ instance from a GEM object (dma-buf exporter role) and to create a GEM
+ object from a dma-buf instance (dma-buf importer role).
+
+
+ struct dma_buf * (*gem_prime_export)(struct drm_device *dev,
+ struct drm_gem_object *obj,
+ int flags);
struct drm_gem_object * (*gem_prime_import)(struct drm_device *dev,
- struct dma_buf *dma_buf);
- These two operations are mandatory for GEM drivers that support
- PRIME.
-
-
-
- PRIME Helper Functions
-!Pdrivers/gpu/drm/drm_prime.c PRIME Helpers
+ struct dma_buf *dma_buf);
+ These two operations are mandatory for GEM drivers that support
+ PRIME.
+
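+ For a GEM driver the resulting hookup is thus, as a minimal sketch
+ (foo_driver is hypothetical; the drm_gem_prime_* helpers are the stock
+ ones and assume the lower-level gem_prime_get_sg_table and related
+ hooks are implemented):
+
+	static struct drm_driver foo_driver = {
+		.driver_features = DRIVER_GEM | DRIVER_PRIME,
+		.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
+		.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
+		.gem_prime_export = drm_gem_prime_export,
+		.gem_prime_import = drm_gem_prime_import,
+		/* ... */
+	};
+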
-
-
- PRIME Function References
+
+ PRIME Helper Functions
+!Pdrivers/gpu/drm/drm_prime.c PRIME Helpers
+
+
+
+ PRIME Function References
!Edrivers/gpu/drm/drm_prime.c
-
-
- DRM MM Range Allocator
-
- Overview
+
+
+ DRM MM Range Allocator
+
+ Overview
!Pdrivers/gpu/drm/drm_mm.c Overview
-
-
- LRU Scan/Eviction Support
+
+
+ LRU Scan/Eviction Support
!Pdrivers/gpu/drm/drm_mm.c lru scan roaster
-
+
-
- DRM MM Range Allocator Function References
+
+ DRM MM Range Allocator Function References
!Edrivers/gpu/drm/drm_mm.c
!Iinclude/drm/drm_mm.h
-
+
+
+ CMA Helper Functions Reference
+!Pdrivers/gpu/drm/drm_gem_cma_helper.c cma helpers
+!Edrivers/gpu/drm/drm_gem_cma_helper.c
+!Iinclude/drm/drm_gem_cma_helper.h
+
@@ -995,6 +1000,10 @@ int max_width, max_height;
Display Modes Function Reference
!Iinclude/drm/drm_modes.h
!Edrivers/gpu/drm/drm_modes.c
+
+
+ Atomic Mode Setting Function Reference
+!Edrivers/gpu/drm/drm_atomic.c
Frame Buffer Creation
@@ -1826,6 +1835,10 @@ void intel_crt_init(struct drm_device *dev)
KMS API Functions
!Edrivers/gpu/drm/drm_crtc.c
+
+
+ KMS Data Structures
+!Iinclude/drm/drm_crtc.h
KMS Locking
@@ -1934,10 +1947,16 @@ void intel_crt_init(struct drm_device *dev)
and then retrieves a list of modes by calling the connector
get_modes helper operation.
+
+ If the helper operation returns no mode, and if the connector status
+ is connector_status_connected, standard VESA DMT modes up to
+ 1024x768 are automatically added to the modes list by a call to
+ drm_add_modes_noedid.
+
- The function filters out modes larger than
+ The function then filters out modes larger than
max_width and max_height
- if specified. It then calls the optional connector
+ if specified. It finally calls the optional connector
mode_valid helper operation for each mode in
the probed list to check whether the mode is valid for the connector.
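As an illustration, a mode_valid implementation typically just checks
each mode against hardware limits; a hypothetical sketch (the foo_*
names and FOO_MAX_DOTCLOCK are not real identifiers):
	static enum drm_mode_status
	foo_connector_mode_valid(struct drm_connector *connector,
				 struct drm_display_mode *mode)
	{
		/* mode->clock is the dotclock in kHz. */
		if (mode->clock > FOO_MAX_DOTCLOCK)
			return MODE_CLOCK_HIGH;
		return MODE_OK;
	}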
@@ -2077,11 +2096,19 @@ void intel_crt_init(struct drm_device *dev)
int (*get_modes)(struct drm_connector *connector);
Fill the connector's probed_modes list
- by parsing EDID data with drm_add_edid_modes or
- calling drm_mode_probed_add directly for every
+ by parsing EDID data with drm_add_edid_modes,
+ adding standard VESA DMT modes with drm_add_modes_noedid,
+ or calling drm_mode_probed_add directly for every
supported mode and return the number of modes it has detected. This
operation is mandatory.
+
+ Note that the caller function will automatically add standard VESA
+ DMT modes up to 1024x768 if the get_modes
+ helper operation returns no mode and if the connector status is
+ connector_status_connected. There is no need to call
+ drm_add_modes_noedid manually in that case.
+
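+ A minimal EDID-based get_modes sketch (the foo_* names are
+ hypothetical, and the DDC i2c adapter lookup is driver-specific):
+
+	static int foo_connector_get_modes(struct drm_connector *connector)
+	{
+		struct foo_connector *foo = to_foo_connector(connector);
+		struct edid *edid;
+		int count = 0;
+
+		edid = drm_get_edid(connector, foo->ddc);
+		if (edid) {
+			drm_mode_connector_update_edid_property(connector, edid);
+			count = drm_add_edid_modes(connector, edid);
+			kfree(edid);
+		}
+		return count;
+	}
+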
When adding modes manually the driver creates each mode with a call to
drm_mode_create and must fill the following fields.
@@ -2279,7 +2306,7 @@ void intel_crt_init(struct drm_device *dev)
drm_helper_probe_single_connector_modes.
- When parsing EDID data, drm_add_edid_modes fill the
+ When parsing EDID data, drm_add_edid_modes fills the
connector display_info
width_mm and
height_mm fields. When creating modes
@@ -2316,9 +2343,27 @@ void intel_crt_init(struct drm_device *dev)
+
+ Atomic Modeset Helper Functions Reference
+
+ Overview
+!Pdrivers/gpu/drm/drm_atomic_helper.c overview
+
+
+ Implementing Asynchronous Atomic Commit
+!Pdrivers/gpu/drm/drm_atomic_helper.c implementing async commit
+
+
+ Atomic State Reset and Initialization
+!Pdrivers/gpu/drm/drm_atomic_helper.c atomic state reset and initialization
+
+!Iinclude/drm/drm_atomic_helper.h
+!Edrivers/gpu/drm/drm_atomic_helper.c
+
Modeset Helper Functions Reference
!Edrivers/gpu/drm/drm_crtc_helper.c
+!Pdrivers/gpu/drm/drm_crtc_helper.c overview
Output Probing Helper Functions Reference
@@ -2342,6 +2387,12 @@ void intel_crt_init(struct drm_device *dev)
!Pdrivers/gpu/drm/drm_dp_mst_topology.c dp mst helper
!Iinclude/drm/drm_dp_mst_helper.h
!Edrivers/gpu/drm/drm_dp_mst_topology.c
+
+
+ MIPI DSI Helper Functions Reference
+!Pdrivers/gpu/drm/drm_mipi_dsi.c dsi helpers
+!Iinclude/drm/drm_mipi_dsi.h
+!Edrivers/gpu/drm/drm_mipi_dsi.c
EDID Helper Functions Reference
@@ -2372,7 +2423,12 @@ void intel_crt_init(struct drm_device *dev)
Plane Helper Reference
-!Edrivers/gpu/drm/drm_plane_helper.c Plane Helpers
+!Edrivers/gpu/drm/drm_plane_helper.c
+!Pdrivers/gpu/drm/drm_plane_helper.c overview
+
+
+ Tile group
+!Pdrivers/gpu/drm/drm_crtc.c Tile group
@@ -2508,8 +2564,8 @@ void intel_crt_init(struct drm_device *dev)
Description/Restrictions |
- DRM |
- Generic |
+ DRM |
+ Generic |
“EDID” |
BLOB | IMMUTABLE |
0 |
@@ -2524,6 +2580,20 @@ void intel_crt_init(struct drm_device *dev)
Contains DPMS operation mode value. |
+ “PATH” |
+ BLOB | IMMUTABLE |
+ 0 |
+ Connector |
+ Contains topology path to a connector. |
+
+
+ “TILE” |
+ BLOB | IMMUTABLE |
+ 0 |
+ Connector |
+ Contains tiling information for a connector. |
+
+
Plane |
“type” |
ENUM | IMMUTABLE |
@@ -2639,6 +2709,21 @@ void intel_crt_init(struct drm_device *dev)
TBD |
+ Virtual GPU |
+ “suggested X” |
+ RANGE |
+ Min=0, Max=0xffffffff |
+ Connector |
+ property to suggest an X offset for a connector |
+
+
+ “suggested Y” |
+ RANGE |
+ Min=0, Max=0xffffffff |
+ Connector |
+ property to suggest a Y offset for a connector |
+
+
Optional |
“scaling mode” |
ENUM |
@@ -3386,6 +3471,13 @@ void (*disable_vblank) (struct drm_device *dev, int crtc);
by scheduling a timer. The delay is accessible through the vblankoffdelay
module parameter or the drm_vblank_offdelay global
variable and expressed in milliseconds. Its default value is 5000 ms.
+ Zero means never disable, and a negative value means disable immediately.
+ Drivers may override the behaviour by setting the
+ drm_device
+ vblank_disable_immediate flag, which when set
+ causes vblank interrupts to be disabled immediately regardless of the
+ drm_vblank_offdelay value. The flag should only be set if there's a
+ properly working hardware vblank counter present.
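+ For instance, in the driver's initialization code (a sketch, only
+ valid when the hardware frame counter is reliable):
+
+	ret = drm_vblank_init(dev, dev->mode_config.num_crtc);
+	if (ret)
+		return ret;
+	/* The hardware counter is trustworthy, so vblank interrupts can
+	 * be disabled immediately instead of after the off delay. */
+	dev->vblank_disable_immediate = true;
+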
When a vertical blanking interrupt occurs drivers only need to call the
@@ -3400,6 +3492,7 @@ void (*disable_vblank) (struct drm_device *dev, int crtc);
Vertical Blanking and Interrupt Handling Functions Reference
!Edrivers/gpu/drm/drm_irq.c
+!Finclude/drm/drmP.h drm_crtc_vblank_waitqueue
@@ -3780,6 +3873,26 @@ int num_ioctls;
blocks. This excludes a set of SoC platforms with an SGX rendering unit;
those have basic support through the gma500 drm driver.
+
+ Core Driver Infrastructure
+
+ This section covers core driver infrastructure used by both the display
+ and the GEM parts of the driver.
+
+
+ Runtime Power Management
+!Pdrivers/gpu/drm/i915/intel_runtime_pm.c runtime pm
+!Idrivers/gpu/drm/i915/intel_runtime_pm.c
+
+
+ Interrupt Handling
+!Pdrivers/gpu/drm/i915/i915_irq.c interrupt handling
+!Fdrivers/gpu/drm/i915/i915_irq.c intel_irq_init intel_irq_init_hw intel_hpd_init
+!Fdrivers/gpu/drm/i915/i915_irq.c intel_irq_fini
+!Fdrivers/gpu/drm/i915/i915_irq.c intel_runtime_pm_disable_interrupts
+!Fdrivers/gpu/drm/i915/i915_irq.c intel_runtime_pm_enable_interrupts
+
+
Display Hardware Handling
@@ -3796,6 +3909,18 @@ int num_ioctls;
configuration change.
+
+ Frontbuffer Tracking
+!Pdrivers/gpu/drm/i915/intel_frontbuffer.c frontbuffer tracking
+!Idrivers/gpu/drm/i915/intel_frontbuffer.c
+!Fdrivers/gpu/drm/i915/intel_drv.h intel_frontbuffer_flip
+!Fdrivers/gpu/drm/i915/i915_gem.c i915_gem_track_fb
+
+
+ Display FIFO Underrun Reporting
+!Pdrivers/gpu/drm/i915/intel_fifo_underrun.c fifo underrun handling
+!Idrivers/gpu/drm/i915/intel_fifo_underrun.c
+
Plane Configuration
@@ -3815,6 +3940,16 @@ int num_ioctls;
probing, so those sections fully apply.
+
+ High Definition Audio
+!Pdrivers/gpu/drm/i915/intel_audio.c High Definition Audio over HDMI and Display Port
+!Idrivers/gpu/drm/i915/intel_audio.c
+
+
+ Panel Self Refresh (PSR/SRD)
+!Pdrivers/gpu/drm/i915/intel_psr.c Panel Self Refresh (PSR/SRD)
+!Idrivers/gpu/drm/i915/intel_psr.c
+
DPIO
!Pdrivers/gpu/drm/i915/i915_reg.h DPIO
@@ -3918,7 +4053,34 @@ int num_ioctls;
!Pdrivers/gpu/drm/i915/i915_cmd_parser.c batch buffer command parser
!Idrivers/gpu/drm/i915/i915_cmd_parser.c
+
+ Logical Rings, Logical Ring Contexts and Execlists
+!Pdrivers/gpu/drm/i915/intel_lrc.c Logical Rings, Logical Ring Contexts and Execlists
+!Idrivers/gpu/drm/i915/intel_lrc.c
+
+
+
+ Tracing
+
+ This section covers all things related to the tracepoints implemented in
+ the i915 driver.
+
+
+ i915_ppgtt_create and i915_ppgtt_release
+!Pdrivers/gpu/drm/i915/i915_trace.h i915_ppgtt_create and i915_ppgtt_release tracepoints
+
+
+ i915_context_create and i915_context_free
+!Pdrivers/gpu/drm/i915/i915_trace.h i915_context_create and i915_context_free tracepoints
+
+
+ switch_mm
+!Pdrivers/gpu/drm/i915/i915_trace.h switch_mm tracepoint
+
+
+
+!Cdrivers/gpu/drm/i915/i915_irq.c