Many utilities default to creating GeoTIFF files if a format is not specified. File extensions are not used to guess output format, nor are extensions generally added by GDAL if not indicated in the filename by the user.
The creation options available vary by format driver, and some simple formats have no creation options at all. The creation options supported by a format can be listed with the "--format <format>" commandline option, but the web page for the format is the definitive source of information on driver creation options.
These utilities allow the coordinate system (SRS = spatial reference system) to be assigned in a variety of formats.
gdalinfo [--help-general] [-mm] [-stats] [-hist] [-nogcp] [-nomd] [-noct] [-nofl] [-checksum] [-mdd domain]* [-sd subdataset] datasetname
gdalinfo will report all of the following (if known):
gdalinfo ~/openev/utm.tif
Driver: GTiff/GeoTIFF
Size is 512, 512
Coordinate System is:
PROJCS["NAD27 / UTM zone 11N",
    GEOGCS["NAD27",
        DATUM["North_American_Datum_1927",
            SPHEROID["Clarke 1866",6378206.4,294.978698213901]],
        PRIMEM["Greenwich",0],
        UNIT["degree",0.0174532925199433]],
    PROJECTION["Transverse_Mercator"],
    PARAMETER["latitude_of_origin",0],
    PARAMETER["central_meridian",-117],
    PARAMETER["scale_factor",0.9996],
    PARAMETER["false_easting",500000],
    PARAMETER["false_northing",0],
    UNIT["metre",1]]
Origin = (440720.000000,3751320.000000)
Pixel Size = (60.000000,-60.000000)
Corner Coordinates:
Upper Left  (  440720.000, 3751320.000) (117d38'28.21"W, 33d54'8.47"N)
Lower Left  (  440720.000, 3720600.000) (117d38'20.79"W, 33d37'31.04"N)
Upper Right (  471440.000, 3751320.000) (117d18'32.07"W, 33d54'13.08"N)
Lower Right (  471440.000, 3720600.000) (117d18'28.50"W, 33d37'35.61"N)
Center      (  456080.000, 3735960.000) (117d28'27.39"W, 33d45'52.46"N)
Band 1 Block=512x16 Type=Byte, ColorInterp=Gray
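The corner coordinates in the report follow directly from the dataset's geotransform (the origin and pixel size shown above). A minimal sketch of that affine mapping, using the values from the utm.tif example:

```python
# GDAL-style geotransform: (origin_x, pixel_w, row_rot, origin_y, col_rot, pixel_h).
# Values taken from the utm.tif example above (no rotation terms).
GT = (440720.0, 60.0, 0.0, 3751320.0, 0.0, -60.0)

def pixel_to_geo(gt, col, row):
    """Map a pixel/line position to georeferenced X/Y coordinates."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

# The corners of the 512x512 raster reproduce the report's corner coordinates:
print(pixel_to_geo(GT, 0, 0))      # upper left  -> (440720.0, 3751320.0)
print(pixel_to_geo(GT, 512, 512))  # lower right -> (471440.0, 3720600.0)
print(pixel_to_geo(GT, 256, 256))  # center      -> (456080.0, 3735960.0)
```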
gdal_translate [--help-general] [-ot {Byte/Int16/UInt16/UInt32/Int32/Float32/Float64/ CInt16/CInt32/CFloat32/CFloat64}] [-strict] [-of format] [-b band] [-mask band] [-expand {gray|rgb|rgba}] [-outsize xsize[%] ysize[%]] [-unscale] [-scale [src_min src_max [dst_min dst_max]]] [-srcwin xoff yoff xsize ysize] [-projwin ulx uly lrx lry] [-a_srs srs_def] [-a_ullr ulx uly lrx lry] [-a_nodata value] [-gcp pixel line easting northing [elevation]]* [-mo "META-TAG=VALUE"]* [-q] [-sds] [-co "NAME=VALUE"]* [-stats] src_dataset dst_dataset
gdal_translate -of GTiff -co "TILED=YES" utm.tif utm_tiled.tif
Starting with GDAL 1.8.0, to create a JPEG-compressed TIFF with an internal mask from an RGBA dataset:
gdal_translate rgba.tif withmask.tif -b 1 -b 2 -b 3 -mask 4 -co COMPRESS=JPEG -co PHOTOMETRIC=YCBCR --config GDAL_TIFF_INTERNAL_MASK YES
Starting with GDAL 1.8.0, to create an RGBA dataset from an RGB dataset with a mask:
gdal_translate withmask.tif rgba.tif -b 1 -b 2 -b 3 -b mask
gdaladdo [-r {nearest,average,gauss,cubic,average_mp,average_magphase,mode}] [-ro] [-clean] [--help-general] filename levels
Mode (available from GDAL 1.6.0) selects the value which appears most often of all the sampled points. average_mp is unsuitable for use. Average_magphase averages complex data in mag/phase space. Nearest and average are applicable to normal image data. Nearest applies a nearest neighbour (simple sampling) resampler, while average computes the average of all non-NODATA contributing pixels. Cubic resampling (available from GDAL 1.7.0) applies a 4x4 approximate cubic convolution kernel. Gauss resampling (available from GDAL 1.6.0) applies a Gaussian kernel before computing the overview, which can lead to better results than simple averaging, e.g. in case of sharp edges with high contrast or noisy patterns. The advised level values should be 2, 4, 8, ... so that a 3x3 resampling Gaussian kernel is selected.
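The practical difference between nearest, average and mode can be sketched on a single sampled block (the values and the nodata convention here are hypothetical; this is an illustration of the selection rules, not gdaladdo's code):

```python
from collections import Counter

NODATA = 0  # hypothetical nodata value for this sketch

def resample_nearest(block):
    # Simple sampling: take one representative pixel (here the first).
    return block[0]

def resample_average(block):
    # Average of all non-NODATA contributing pixels.
    valid = [v for v in block if v != NODATA]
    return sum(valid) / len(valid) if valid else NODATA

def resample_mode(block):
    # Value that appears most often among the sampled points.
    valid = [v for v in block if v != NODATA]
    return Counter(valid).most_common(1)[0][0] if valid else NODATA

block = [10, 10, 40, 0]  # one 2x2 window containing one nodata pixel
print(resample_nearest(block))  # 10
print(resample_average(block))  # 20.0  (nodata pixel excluded)
print(resample_mode(block))     # 10
```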
gdaladdo will properly honour NODATA_VALUES tuples (special dataset metadata) so that only a given RGB triplet (in the case of an RGB image) will be considered as the nodata value, and not each value of the triplet independently per band.
Selecting a level value like 2 causes an overview level that is 1/2 the resolution (in each dimension) of the base layer to be computed. If the file has existing overview levels at a level selected, those levels will be recomputed and rewritten in place.
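The relationship between a level value and the resulting overview dimensions can be sketched as follows (ceiling rounding is assumed here so no edge pixels are dropped; the exact rounding is an implementation detail):

```python
def overview_size(base, level):
    """One dimension of an overview for a given decimation level
    (ceiling division, so a partial edge block still gets a pixel)."""
    return (base + level - 1) // level

# For the 512x512 example raster, levels 2 4 8 16 give:
for level in (2, 4, 8, 16):
    print(level, overview_size(512, level))  # 256, 128, 64, 32
```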
Some format drivers do not support overviews at all. Many format drivers store overviews in a secondary file with the extension .ovr that is actually in TIFF format. By default, the GeoTIFF driver stores overviews internally to the file operated on (if it is writable), unless the -ro flag is specified.
Most drivers also support an alternate overview format using the Erdas Imagine format. To trigger this, use the USE_RRD=YES configuration option (e.g. --config USE_RRD YES). This will place the overviews in an associated .aux file suitable for direct use with Imagine or ArcGIS as well as GDAL applications.
For JPEG compressed external overviews, the JPEG quality can be set with "--config JPEG_QUALITY_OVERVIEW value" (GDAL 1.7.0 or later).
For LZW or DEFLATE compressed external overviews, the predictor value can be set with "--config PREDICTOR_OVERVIEW 1|2|3" (GDAL 1.8.0 or later).
To produce the smallest possible JPEG-In-TIFF overviews, you should use:
--config COMPRESS_OVERVIEW JPEG --config PHOTOMETRIC_OVERVIEW YCBCR --config INTERLEAVE_OVERVIEW PIXEL
Starting with GDAL 1.7.0, external overviews can be created in the BigTIFF format by using the BIGTIFF_OVERVIEW configuration option : --config BIGTIFF_OVERVIEW {IF_NEEDED|IF_SAFER|YES|NO}. The default value is IF_NEEDED. The behaviour of this option is exactly the same as the BIGTIFF creation option documented in the GeoTIFF driver documentation.
See the documentation of the GeoTIFF driver for further explanations on all those options.
Create overviews, embedded in the supplied TIFF file:
gdaladdo -r average abc.tif 2 4 8 16
Create an external compressed GeoTIFF overview file from the ERDAS .IMG file:
gdaladdo -ro --config COMPRESS_OVERVIEW DEFLATE erdas.img 2 4 8 16
Create an external JPEG-compressed GeoTIFF overview file from a 3-band RGB dataset (if the dataset is a writable GeoTIFF, you also need to add the -ro option to force the generation of external overviews):
gdaladdo --config COMPRESS_OVERVIEW JPEG --config PHOTOMETRIC_OVERVIEW YCBCR --config INTERLEAVE_OVERVIEW PIXEL rgb_dataset.ext 2 4 8 16
Create Erdas Imagine format overviews for the indicated JPEG file:
gdaladdo --config USE_RRD YES airphoto.jpg 3 9 27 81
gdalwarp [--help-general] [--formats] [-s_srs srs_def] [-t_srs srs_def] [-to "NAME=VALUE"] [-order n | -tps | -rpc | -geoloc] [-et err_threshold] [-refine_gcps tolerance [minimum_gcps]] [-te xmin ymin xmax ymax] [-tr xres yres] [-tap] [-ts width height] [-wo "NAME=VALUE"] [-ot Byte/Int16/...] [-wt Byte/Int16] [-srcnodata "value [value...]"] [-dstnodata "value [value...]"] [-dstalpha] [-r resampling_method] [-wm memory_in_mb] [-multi] [-q] [-cutline datasource] [-cl layer] [-cwhere expression] [-csql statement] [-cblend dist_in_pixels] [-crop_to_cutline] [-of format] [-co "NAME=VALUE"]* [-overwrite] srcfile* dstfile
The value None can be used with -srcnodata to ignore intrinsic nodata settings on the source dataset.
Mosaicing into an existing output file is supported if the output file already exists. The spatial extent of the existing file will not be modified to accommodate new data, so you may have to remove it in that case, or use the -overwrite option.
Polygon cutlines may be used as a mask to restrict the area of the destination file that may be updated, including blending. If the OGR layer containing the cutline features has no explicit SRS, the cutline features must be in the georeferenced units of the destination file. When outputting to a not-yet-existing target dataset, its extent will be the one of the original raster unless -te or -crop_to_cutline are specified.
gdalwarp -t_srs '+proj=utm +zone=11 +datum=WGS84' raw_spot.tif utm11.tif
For instance, the second channel of an ASTER image stored in HDF with control points mapping the corners to lat/long could be warped to a UTM projection with a command like this:
gdalwarp HDF4_SDS:ASTER_L1B:"pg-PR1B0000-2002031402_100_001":2 pg-PR1B0000-2002031402_100_001_2.tif
gdaltindex [-tileindex field_name] [-write_absolute_path] [-skip_different_projection] index_file [gdal_file]*
gdaltindex doq_index.shp doq/*.tif
gdalbuildvrt [-tileindex field_name] [-resolution {highest|lowest|average|user}] [-tr xres yres] [-tap] [-separate] [-allow_projection_difference] [-q] [-te xmin ymin xmax ymax] [-addalpha] [-hidenodata] [-srcnodata "value [value...]"] [-vrtnodata "value [value...]"] [-input_file_list my_liste.txt] [-overwrite] output.vrt [gdalfile]*
With -separate, each input file goes into a separate stacked band in the VRT. Otherwise, the files are considered as tiles of a larger mosaic and the VRT file has as many bands as one of the input files.
If one GDAL dataset is made of several subdatasets and has 0 raster bands, all the subdatasets will be added to the VRT rather than the dataset itself.
gdalbuildvrt performs some checks to ensure that all files that will be put in the resulting VRT have similar characteristics: number of bands, projection, color interpretation, etc. If not, files that do not match the common characteristics will be skipped. (This is only true in the default mode, and not when using the -separate option.)
If there is spatial overlap between files, the result may depend on the order in which they are inserted in the VRT file, but this behaviour should not be relied on.
This utility is somewhat equivalent to the gdal_vrtmerge.py utility and is built by default in GDAL 1.6.1.
The value None can be used with -srcnodata and -vrtnodata to ignore intrinsic nodata settings on the source datasets.
gdalbuildvrt doq_index.vrt doq/*.tif
gdalbuildvrt -input_file_list my_liste.txt doq_index.vrt
gdalbuildvrt -separate rgb.vrt red.tif green.tif blue.tif
gdalbuildvrt -hidenodata -vrtnodata "0 0 255" doq_index.vrt doq/*.tif
Usage: gdal_contour [-b <band>] [-a <attribute_name>] [-3d] [-inodata] [-snodata n] [-f <formatname>] [-i <interval>] [-off <offset>] [-fl <level> <level>...] [-nln <outlayername>] <src_filename> <dst_filename>
Starting from version 1.7, the contour line-strings will be oriented consistently: the high side will be on the right, i.e. a line string goes clockwise around a top.
gdal_contour -a elev dem.tif contour.shp -i 10.0
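Which contour levels the -i/-off options produce can be sketched as: every value off + k*interval that falls within the raster's elevation range, plus any -fl fixed levels. A small illustration (the elevation range below is hypothetical, not taken from a real DEM):

```python
import math

def contour_levels(zmin, zmax, interval, offset=0.0, fixed=()):
    """Levels implied by -i interval / -off offset, plus -fl fixed levels."""
    k = math.ceil((zmin - offset) / interval)   # first multiple at or above zmin
    levels = set()
    level = offset + k * interval
    while level <= zmax:
        levels.add(level)
        level += interval
    levels.update(fixed)
    return sorted(levels)

# A DEM spanning 132..187 m with -i 10.0 yields a contour every 10 m:
print(contour_levels(132.0, 187.0, 10.0))
# [140.0, 150.0, 160.0, 170.0, 180.0]
```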
This utility has 7 different modes:

- To generate a shaded relief map from any GDAL-supported elevation raster:

gdaldem hillshade input_dem output_hillshade [-z ZFactor (default=1)] [-s scale* (default=1)] [-az Azimuth (default=315)] [-alt Altitude (default=45)] [-alg ZevenbergenThorne] [-compute_edges] [-b Band (default=1)] [-of format] [-co "NAME=VALUE"]* [-q]

- To generate a slope map from any GDAL-supported elevation raster:

gdaldem slope input_dem output_slope_map [-p use percent slope (default=degrees)] [-s scale* (default=1)] [-alg ZevenbergenThorne] [-compute_edges] [-b Band (default=1)] [-of format] [-co "NAME=VALUE"]* [-q]

- To generate an aspect map from any GDAL-supported elevation raster. Outputs a 32-bit float raster with pixel values from 0-360 indicating azimuth:

gdaldem aspect input_dem output_aspect_map [-trigonometric] [-zero_for_flat] [-alg ZevenbergenThorne] [-compute_edges] [-b Band (default=1)] [-of format] [-co "NAME=VALUE"]* [-q]

- To generate a color relief map from any GDAL-supported elevation raster, where color_text_file contains lines of the format "elevation_value red green blue":

gdaldem color-relief input_dem color_text_file output_color_relief_map [-alpha] [-exact_color_entry | -nearest_color_entry] [-b Band (default=1)] [-of format] [-co "NAME=VALUE"]* [-q]

- To generate a Terrain Ruggedness Index (TRI) map from any GDAL-supported elevation raster:

gdaldem TRI input_dem output_TRI_map [-compute_edges] [-b Band (default=1)] [-of format] [-q]

- To generate a Topographic Position Index (TPI) map from any GDAL-supported elevation raster:

gdaldem TPI input_dem output_TPI_map [-compute_edges] [-b Band (default=1)] [-of format] [-q]

- To generate a roughness map from any GDAL-supported elevation raster:

gdaldem roughness input_dem output_roughness_map [-compute_edges] [-b Band (default=1)] [-of format] [-q]

Note: scale is the ratio of vertical units to horizontal (for Feet:Latlong use scale=370400, for Meters:LatLong use scale=111120).
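The hillshade mode shades each pixel from its 3x3 elevation window. A minimal sketch of the classic Horn-style computation follows; this is a simplified illustration, not gdaldem's exact code (the aspect/azimuth conventions, z-factor handling and edge behaviour differ in detail), and the output range 1..255 reflects the note above that 0 is reserved for nodata:

```python
import math

def hillshade(window, azimuth=315.0, altitude=45.0, z=1.0, scale=1.0, cellsize=1.0):
    """Shade one pixel from its 3x3 elevation window (rows top to bottom).
    Horn-style finite differences; a sketch under assumed conventions."""
    (a, b, c), (d, e, f), (g, h, i) = window
    dzdx = ((c + 2 * f + i) - (a + 2 * d + g)) * z / (8 * cellsize * scale)
    dzdy = ((g + 2 * h + i) - (a + 2 * b + c)) * z / (8 * cellsize * scale)
    slope = math.atan(math.hypot(dzdx, dzdy))
    aspect = math.atan2(dzdy, -dzdx)
    az = math.radians(360.0 - azimuth + 90.0)
    alt = math.radians(altitude)
    shade = (math.sin(alt) * math.cos(slope)
             + math.cos(alt) * math.sin(slope) * math.cos(az - aspect))
    return round(254 * max(shade, 0.0)) + 1  # 1..255; 0 reserved for nodata

flat = [[100, 100, 100]] * 3
print(hillshade(flat))  # flat ground: round(254 * sin(45 deg)) + 1 = 181
```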
The following general options are available :
For all algorithms except color-relief, a nodata value will be emitted in the target dataset if at least one pixel set to the nodata value is found in the 3x3 window centered around each source pixel. The consequence is that there will be a 1-pixel border set to the nodata value around each image. From GDAL 1.8.0, if -compute_edges is specified, gdaldem will compute values at image edges, or when a nodata value is found in the 3x3 window, by interpolating missing values.
The value 0 is used as the output nodata value.
The following specific options are available :
The value -9999 is used as the output nodata value.
The following specific options are available :
The following specific options are available :
By using those two options, the aspect returned by gdaldem aspect should be identical to that of GRASS r.slope.aspect. Otherwise, it is identical to that of Matthew Perry's aspect.cpp utility.
The following specific options are available :
The color-relief mode is the only mode that supports VRT as output format. In that case, it will translate the color configuration file into appropriate <LUT> elements. Note that elevations specified as percentages will be translated as absolute values, which must be taken into account when the statistics of the source raster differ from those that were used when building the VRT.
The text-based color configuration file generally contains 4 columns per line: the elevation value and the corresponding Red, Green, Blue components (between 0 and 255). The elevation value can be any floating point value, or the nv keyword for the nodata value. The elevation can also be expressed as a percentage: 0% being the minimum value found in the raster, 100% the maximum value.
An extra column can optionally be added for the alpha component. If it is not specified, full opacity (255) is assumed.
Various field separators are accepted : comma, tabulation, spaces, ':'.
Common colors used by GRASS can also be specified by using their name, instead of the RGB triplet. The supported list is : white, black, red, green, blue, yellow, magenta, cyan, aqua, grey/gray, orange, brown, purple/violet and indigo.
Since GDAL 1.8.0, GMT .cpt palette files are also supported (COLOR_MODEL = RGB only).
Note: the syntax of the color configuration file is derived from the one supported by the GRASS r.colors utility. ESRI HDR color table files (.clr) also match that syntax. The alpha component and the support of tabulations and comma as separators are GDAL-specific extensions.
For example :
3500 white
2500 235:220:175
50% 190 185 135
700 240 250 150
0 50 180 50
nv 0 0 0 0
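Between two entries of the configuration file, color-relief linearly interpolates each band (unless -exact_color_entry or -nearest_color_entry is given). A sketch of that per-band interpolation, using two entries from the example file above:

```python
def interpolate_color(elev, lo, hi):
    """Linear per-band interpolation between two (elevation, (r, g, b)) entries."""
    (e0, c0), (e1, c1) = lo, hi
    t = (elev - e0) / (e1 - e0)  # position of elev between the two entries
    return tuple(round(a + t * (b - a)) for a, b in zip(c0, c1))

# Entries "0 50 180 50" and "700 240 250 150" from the example file;
# an elevation of 350 sits exactly halfway between them:
print(interpolate_color(350.0, (0.0, (50, 180, 50)), (700.0, (240, 250, 150))))
# (145, 215, 100)
```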
The value -9999 is used as the output nodata value.
There are no specific options.
The value -9999 is used as the output nodata value.
There are no specific options.
The value -9999 is used as the output nodata value.
There are no specific options.
Derived from code by Michael Shapiro, Olga Waupotitsch, Marjorie Larson, Jim Westervelt : U.S. Army CERL, 1993. GRASS 4.1 Reference Manual. U.S. Army Corps of Engineers, Construction Engineering Research Laboratories, Champaign, Illinois, 1-425.
http://grass.osgeo.org/grass64/manuals/html64_user/r.slope.aspect.html
http://grass.osgeo.org/grass64/manuals/html64_user/r.shaded.relief.html
http://grass.osgeo.org/grass64/manuals/html64_user/r.colors.html
rgb2pct.py [-n colors | -pct palette_file] [-of format] source_file dest_file
NOTE: rgb2pct.py is a Python script, and will only work if GDAL was built with Python support.
% rgb2pct.py -pct palette.vrt rgb.tif pseudo-colored.tif
% more < palette.vrt
<VRTDataset rasterXSize="226" rasterYSize="271">
  <VRTRasterBand dataType="Byte" band="1">
    <ColorInterp>Palette</ColorInterp>
    <ColorTable>
      <Entry c1="238" c2="238" c3="238" c4="255"/>
      <Entry c1="237" c2="237" c3="237" c4="255"/>
      <Entry c1="236" c2="236" c3="236" c4="255"/>
      <Entry c1="229" c2="229" c3="229" c4="255"/>
    </ColorTable>
  </VRTRasterBand>
</VRTDataset>
pct2rgb.py [-of format] [-b band] [-rgba] source_file dest_file
NOTE: pct2rgb.py is a Python script, and will only work if GDAL was built with Python support.
The new '-expand rgb|rgba' option of gdal_translate makes this utility obsolete.
gdal_merge.py [-o out_filename] [-of out_format] [-co NAME=VALUE]* [-ps pixelsize_x pixelsize_y] [-tap] [-separate] [-v] [-pct] [-ul_lr ulx uly lrx lry] [-n nodata_value] [-init "value [value...]"] [-ot datatype] [-createonly] input_files
NOTE: gdal_merge.py is a Python script, and will only work if GDAL was built with Python support.
% gdal_merge.py -init 255 -o out.tif in1.tif in2.tif
Create an RGB image that shows blue in pixels with no data. The first two bands will be initialized to 0 and the third band will be initialized to 255.
% gdal_merge.py -init "0 0 255" -o out.tif in1.tif in2.tif
gdal2tiles.py [-title "Title"] [-publishurl http://yourserver/dir/] [-nogooglemaps] [-noopenlayers] [-nokml] [-googlemapskey KEY] [-forcekml] [-v] input_file [output_dir]
GDAL2Tiles also creates the necessary metadata for Google Earth (KML SuperOverlay), in case the supplied map uses the EPSG:4326 projection.
World files and embedded georeferencing are used during tile generation, but you can also publish a picture without proper georeferencing.
NOTE: gdal2tiles.py is a Python script that needs to be run against "new generation" Python GDAL binding.
Usage: gdal_rasterize [-b band]* [-i] [-at] [-burn value]* | [-a attribute_name] [-3d] [-l layername]* [-where expression] [-sql select_statement] [-of format] [-a_srs srs_def] [-co "NAME=VALUE"]* [-a_nodata value] [-init value]* [-te xmin ymin xmax ymax] [-tr xres yres] [-tap] [-ts width height] [-ot {Byte/Int16/UInt16/UInt32/Int32/Float32/Float64/ CInt16/CInt32/CFloat32/CFloat64}] [-q] <src_datasource> <dst_filename>
Note that the vector data must be in the same coordinate system as the raster data; on-the-fly reprojection is not provided.
Since GDAL 1.8.0, the target GDAL file can be created by gdal_rasterize. One of the -tr or -ts options must be used in that case.
gdal_rasterize -b 1 -b 2 -b 3 -burn 255 -burn 0 -burn 0 -l mask mask.shp work.tif
The following would burn all "class A" buildings into the output elevation file, pulling the top elevation from the ROOF_H attribute.
gdal_rasterize -a ROOF_H -where 'class="A"' -l footprints footprints.shp city_dem.tif
gdaltransform [--help-general] [-i] [-s_srs srs_def] [-t_srs srs_def] [-to "NAME=VALUE"] [-order n] [-tps] [-rpc] [-geoloc] [-gcp pixel line easting northing [elevation]]* [srcfile [dstfile]]
Coordinates are read as pairs (or triples) of numbers per line from standard input, transformed, and written out to standard output in the same way. All transformations offered by gdalwarp are handled, including gcp-based ones.
Note that input and output must always be in decimal form. There is currently no support for DMS input or output.
If an input image file is provided, input is in pixel/line coordinates on that image. If an output file is provided, output is in pixel/line coordinates on that image.
gdaltransform -s_srs EPSG:28992 -t_srs EPSG:31370 177502 311865
Produces the following output in meters in the "Belge 1972 / Belgian Lambert 72" projection:
244510.77404604 166154.532871342 -1046.79270555763
gdaltransform -i -rpc 06OCT20025052-P2AS-005553965230_01_P001.TIF 125.67206 39.85307 50
Produces this output measured in pixels and lines on the image:
3499.49282422381 2910.83892848414 50
nearblack [-of format] [-white | [-color c1,c2,c3...cn]*] [-near dist] [-nb non_black_pixels] [-setalpha] [-setmask] [-o outfile] [-q] [-co "NAME=VALUE"]* infile
The algorithm processes the image one scanline at a time. A scan "in" is done from either end, setting pixels to black or white, until at least "non_black_pixels" pixels that are more than "dist" gray levels away from black, white or the custom colors have been encountered, at which point the scan stops. The nearly black, white or custom color pixels are set to black or white. The algorithm also scans from top to bottom and from bottom to top to identify indentations in the top or bottom.
The processing is all done in 8bit (Bytes).
If the output file is omitted, the processed results will be written back to the input file - which must support update.
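The scan described above can be sketched in one dimension: pixels within near of the target color are collapsed to it, and the scan stops once nb pixels farther than near have been seen. This is an illustration of the stopping rule only (nearblack itself works on 2-D scanlines from the image edges inward, with the defaults documented for -near and -nb):

```python
def scan_from_left(row, target=0, near=15, nb=2):
    """Collapse pixels near `target` to `target`, scanning from the left;
    stop after `nb` pixels farther than `near` have been seen."""
    out = list(row)
    far_seen = 0
    for i, v in enumerate(out):
        if abs(v - target) <= near:
            out[i] = target          # nearly-black pixel: set to black
        else:
            far_seen += 1            # pixel clearly inside the image
            if far_seen >= nb:
                break                # enough non-black pixels: stop the scan
    return out

row = [3, 12, 0, 200, 7, 210, 95, 5]
print(scan_from_left(row))
# [0, 0, 0, 200, 0, 210, 95, 5]  -> stops at the second far pixel (210)
```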
Usage:
gdal_retile.py [-v] [-co NAME=VALUE]* [-of out_format] [-ps pixelWidth pixelHeight] [-ot {Byte/Int16/UInt16/UInt32/Int32/Float32/Float64/ CInt16/CInt32/CFloat32/CFloat64}] [ -tileIndex tileIndexName [-tileIndexField tileIndexFieldName]] [ -csv fileName [-csvDelim delimiter]] [-s_srs srs_def] [-pyramidOnly] [-r {near/bilinear/cubic/cubicspline/lanczos}] -levels numberoflevels [-useDirForEachRow] -targetDir TileDirectory input_files
This utility will retile a set of input tile(s). All the input tile(s) must be georeferenced in the same coordinate system and have a matching number of bands. Optionally pyramid levels are generated. It is possible to generate shape file(s) for the tiled output.
If your number of input tiles exhausts the command line buffer, use the general --optfile option.
NOTE: gdal_retile.py is a Python script, and will only work if GDAL was built with Python support.
gdal_grid [-ot {Byte/Int16/UInt16/UInt32/Int32/Float32/Float64/ CInt16/CInt32/CFloat32/CFloat64}] [-of format] [-co "NAME=VALUE"] [-zfield field_name] [-a_srs srs_def] [-spat xmin ymin xmax ymax] [-clipsrc <xmin ymin xmax ymax>|WKT|datasource|spat_extent] [-clipsrcsql sql_statement] [-clipsrclayer layer] [-clipsrcwhere expression] [-l layername]* [-where expression] [-sql select_statement] [-txe xmin xmax] [-tye ymin ymax] [-outsize xsize ysize] [-a algorithm[:parameter1=value1]*] [-q] <src_datasource> <dst_filename>
Note that it is essential to set the search ellipse for the moving average method. It is the window that will be averaged when computing grid node values.
All the metrics have the same set of options:
Here is a small example. Suppose we have a CSV file called dem.csv containing:
Easting,Northing,Elevation
86943.4,891957,139.13
87124.3,892075,135.01
86962.4,892321,182.04
87077.6,891995,135.01
...
For the above data we will create a dem.vrt header with the following content:
<OGRVRTDataSource>
  <OGRVRTLayer name="dem">
    <SrcDataSource>dem.csv</SrcDataSource>
    <GeometryType>wkbPoint</GeometryType>
    <GeometryField encoding="PointFromColumns" x="Easting" y="Northing" z="Elevation"/>
  </OGRVRTLayer>
</OGRVRTDataSource>
This description specifies a so-called 2.5D geometry with three coordinates X, Y and Z. The Z value will be used for interpolation. Now you can use dem.vrt with all OGR programs (start with ogrinfo to test that everything works fine). The datasource will contain a single layer called "dem" filled with point features constructed from the values in the CSV file. Using this technique you can handle CSV files with more than three columns, switch columns, etc.
If your CSV file does not contain column headers then it can be handled in the following way:
<GeometryField encoding="PointFromColumns" x="field_1" y="field_2" z="field_3"/>
Comma Separated Value description page contains details on CSV format supported by GDAL/OGR.
gdal_grid -a invdist:power=2.0:smoothing=1.0 -txe 85000 89000 -tye 894000 890000 -outsize 400 400 -of GTiff -ot Float64 -l dem dem.vrt dem.tiff
The next command does the same thing as the previous one, but reads the values to interpolate from the attribute field specified with the -zfield option instead of the geometry record. So in this case X and Y coordinates are taken from the geometry and Z is taken from the "Elevation" field.
gdal_grid -zfield "Elevation" -a invdist:power=2.0:smoothing=1.0 -txe 85000 89000 -tye 894000 890000 -outsize 400 400 -of GTiff -ot Float64 -l dem dem.vrt dem.tiff
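The invdist algorithm used in these examples estimates each grid node as an inverse-distance-weighted average of the scattered points. A minimal sketch of the idea follows; folding the smoothing parameter into the distance as r = sqrt(d^2 + smoothing^2) is a common convention assumed here, not a claim about gdal_grid's exact code:

```python
import math

def invdist(node, points, power=2.0, smoothing=0.0):
    """Inverse-distance-to-a-power estimate at one grid node.
    points: iterable of (x, y, z) samples."""
    num = den = 0.0
    for x, y, z in points:
        d2 = (x - node[0]) ** 2 + (y - node[1]) ** 2
        r = math.sqrt(d2 + smoothing ** 2)
        if r == 0.0:
            return z  # node coincides with a sample point
        w = 1.0 / r ** power          # nearer points get larger weights
        num += w * z
        den += w
    return num / den

pts = [(0.0, 0.0, 100.0), (10.0, 0.0, 200.0)]
print(invdist((5.0, 0.0), pts))  # equidistant -> plain average, 150.0
print(invdist((1.0, 0.0), pts))  # pulled strongly toward the nearer point
```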
gdal_proximity.py srcfile dstfile [-srcband n] [-dstband n] [-of format] [-co name=value]* [-ot Byte/Int16/Int32/Float32/etc] [-values n,n,n] [-distunits PIXEL/GEO] [-maxdist n] [-nodata n] [-fixed-buf-val n]
gdal_polygonize.py [-o name=value] [-nomask] [-mask filename] raster_file [-b band] [-q] [-f ogr_format] out_file [layer] [fieldname]
The utility will create the output vector datasource if it does not already exist, defaulting to GML format.
The utility is based on the GDALPolygonize() function which has additional details on the algorithm.
gdal_sieve.py [-q] [-st threshold] [-4] [-8] [-o name=value] srcfile [-nomask] [-mask filename] [-of format] [dstfile]
Additional details on the algorithm are available in the GDALSieveFilter() docs.
gdal_fillnodata.py [-q] [-md max_distance] [-si smooth_iterations] [-o name=value] [-b band] srcfile [-nomask] [-mask filename] [-of format] [dstfile]
Additional details on the algorithm are available in the GDALFillNodata() docs.
Usage: gdallocationinfo [--help-general] [-xml] [-lifonly] [-valonly] [-b band]* [-l_srs srs_def] [-geoloc] [-wgs84] srcfile [x y]
This utility is intended to provide a variety of information about a pixel. Currently it reports three things:
The pixel selected is requested by x/y coordinate on the commandline, or read from stdin. More than one coordinate pair can be supplied when reading coordinates from stdin. By default pixel/line coordinates are expected. However with use of the -geoloc, -wgs84, or -l_srs switches it is possible to specify the location in other coordinate systems.
The default report is in a human readable text format. It is possible to instead request xml output with the -xml switch.
For scripting purposes, the -valonly and -lifonly switches are provided to restrict output to the actual pixel values, or the LocationInfo files identified for the pixel.
It is anticipated that additional reporting capabilities will be added to gdallocationinfo in the future.
$ gdallocationinfo utm.tif 256 256
Report:
  Location: (256P,256L)
  Band 1:
    Value: 115
Query a VRT file providing the location in WGS84, and getting the result in xml.
$ gdallocationinfo -xml -wgs84 utm.vrt -117.5 33.75
<Report pixel="217" line="282">
  <BandReport band="1">
    <LocationInfo>
      <File>utm.tif</File>
    </LocationInfo>
    <Value>16</Value>
  </BandReport>
</Report>
gdal-config [OPTIONS] Options: [--prefix[=DIR]] [--libs] [--cflags] [--version] [--ogr-enabled] [--formats]
Long Format Name | Code | Creation | Georeferencing | Maximum file size1 | Compiled by default
---|---|---|---|---|---
Arc/Info ASCII Grid | AAIGrid | Yes | Yes | 2GB | Yes |
ACE2 | ACE2 | No | Yes | -- | Yes |
ADRG/ARC Digitized Raster Graphics (.gen/.thf) | ADRG | Yes | Yes | -- | Yes |
Arc/Info Binary Grid (.adf) | AIG | No | Yes | -- | Yes |
AIRSAR Polarimetric | AIRSAR | No | No | -- | Yes |
Magellan BLX Topo (.blx, .xlb) | BLX | Yes | Yes | -- | Yes |
Bathymetry Attributed Grid (.bag) | BAG | No | Yes | 2GiB | No, needs libhdf5 |
Microsoft Windows Device Independent Bitmap (.bmp) | BMP | Yes | Yes | 4GiB | Yes |
BSB Nautical Chart Format (.kap) | BSB | No | Yes | -- | Yes, can be disabled |
VTP Binary Terrain Format (.bt) | BT | Yes | Yes | -- | Yes |
CEOS (Spot for instance) | CEOS | No | No | -- | Yes |
DRDC COASP SAR Processor Raster | COASP | No | No | -- | Yes |
TerraSAR-X Complex SAR Data Product | COSAR | No | No | -- | Yes |
Convair PolGASP data | CPG | No | Yes | -- | Yes |
USGS LULC Composite Theme Grid | CTG | No | Yes | -- | Yes |
Spot DIMAP (metadata.dim) | DIMAP | No | Yes | -- | Yes |
ELAS DIPEx | DIPEx | No | Yes | -- | Yes |
DODS / OPeNDAP | DODS | No | Yes | -- | No, needs libdap |
First Generation USGS DOQ (.doq) | DOQ1 | No | Yes | -- | Yes |
New Labelled USGS DOQ (.doq) | DOQ2 | No | Yes | -- | Yes |
Military Elevation Data (.dt0, .dt1, .dt2) | DTED | Yes | Yes | -- | Yes |
Arc/Info Export E00 GRID | E00GRID | No | Yes | -- | Yes |
ECRG Table Of Contents (TOC.xml) | ECRGTOC | No | Yes | -- | Yes |
ERDAS Compressed Wavelets (.ecw) | ECW | Yes | Yes | -- | No, needs ECW SDK |
ESRI .hdr Labelled | EHdr | Yes | Yes | No limits | Yes |
Erdas Imagine Raw | EIR | No | Yes | -- | Yes |
NASA ELAS | ELAS | Yes | Yes | -- | Yes |
ENVI .hdr Labelled Raster | ENVI | Yes | Yes | No limits | Yes |
Epsilon - Wavelet compressed images | EPSILON | Yes | No | -- | No, needs EPSILON library |
ERMapper (.ers) | ERS | Yes | Yes | -- | Yes |
Envisat Image Product (.n1) | ESAT | No | No | -- | Yes |
EOSAT FAST Format | FAST | No | Yes | -- | Yes |
FIT | FIT | Yes | No | -- | Yes |
FITS (.fits) | FITS | Yes | No | -- | No, needs libcfitsio |
Fuji BAS Scanner Image | FujiBAS | No | No | -- | Yes |
Generic Binary (.hdr Labelled) | GENBIN | No | No | -- | Yes |
Oracle Spatial GeoRaster | GEORASTER | Yes | Yes | -- | No, needs Oracle client libraries |
GSat File Format | GFF | No | No | -- | Yes |
Graphics Interchange Format (.gif) | GIF | Yes | No | 2GB | Yes (internal GIF library provided) |
WMO GRIB1/GRIB2 (.grb) | GRIB | No | Yes | 2GB | Yes, can be disabled |
GMT Compatible netCDF | GMT | Yes | Yes | 2GB | No, needs libnetcdf |
GRASS Rasters | GRASS | No | Yes | -- | No, needs libgrass |
GRASS ASCII Grid | GRASSASCIIGrid | No | Yes | -- | Yes |
Golden Software ASCII Grid | GSAG | Yes | No | -- | Yes |
Golden Software Binary Grid | GSBG | Yes | No | 4GiB (32767x32767 of 4 bytes each + 56 byte header) | Yes |
Golden Software Surfer 7 Binary Grid | GS7BG | No | No | 4GiB | Yes |
GSC Geogrid | GSC | Yes | No | -- | Yes |
TIFF / BigTIFF / GeoTIFF (.tif) | GTiff | Yes | Yes | 4GiB for classical TIFF / No limits for BigTIFF | Yes (internal libtiff and libgeotiff provided) |
NOAA .gtx vertical datum shift | GTX | Yes | Yes | -- | Yes |
GXF - Grid eXchange File | GXF | No | Yes | 4GiB | Yes |
Hierarchical Data Format Release 4 (HDF4) | HDF4 | Yes | Yes | 2GiB | No, needs libdf |
Hierarchical Data Format Release 5 (HDF5) | HDF5 | No | Yes | 2GiB | No, needs libhdf5 |
HF2/HFZ heightfield raster | HF2 | Yes | Yes | - | Yes |
Erdas Imagine (.img) | HFA | Yes | Yes | No limits2 | Yes |
Image Display and Analysis (WinDisp) | IDA | Yes | Yes | 2GB | Yes |
ILWIS Raster Map (.mpr,.mpl) | ILWIS | Yes | Yes | -- | Yes |
Intergraph Raster | INGR | Yes | Yes | 2GiB | Yes |
USGS Astrogeology ISIS cube (Version 2) | ISIS2 | Yes | Yes | -- | Yes |
USGS Astrogeology ISIS cube (Version 3) | ISIS3 | No | Yes | -- | Yes |
JAXA PALSAR Product Reader (Level 1.1/1.5) | JAXAPALSAR | No | No | -- | Yes |
Japanese DEM (.mem) | JDEM | No | Yes | -- | Yes |
JPEG JFIF (.jpg) | JPEG | Yes | Yes | 4GiB (max dimensions 65500x65500) | Yes (internal libjpeg provided) |
JPEG-LS | JPEGLS | Yes | No | -- | No, needs CharLS library |
JPEG2000 (.jp2, .j2k) | JPEG2000 | Yes | Yes | 2GiB | No, needs libjasper |
JPEG2000 (.jp2, .j2k) | JP2ECW | Yes | Yes | 500MB | No, needs ECW SDK |
JPEG2000 (.jp2, .j2k) | JP2KAK | Yes | Yes | No limits | No, needs Kakadu library |
JPEG2000 (.jp2, .j2k) | JP2MrSID | Yes | Yes | -- | No, needs MrSID SDK |
JPEG2000 (.jp2, .j2k) | JP2OpenJPEG | Yes | Yes | -- | No, needs OpenJPEG library (v2) |
JPIP (based on Kakadu) | JPIPKAK | No | Yes | -- | No, needs Kakadu library |
KMLSUPEROVERLAY | KMLSUPEROVERLAY | Yes | Yes | -- | Yes |
NOAA Polar Orbiter Level 1b Data Set (AVHRR) | L1B | No | Yes | -- | Yes |
Erdas 7.x .LAN and .GIS | LAN | No | Yes | 2GB | Yes |
FARSITE v.4 LCP Format | LCP | No | Yes | No limits | Yes |
Daylon Leveller Heightfield | Leveller | No | Yes | 2GB | Yes |
NADCON .los/.las Datum Grid Shift | LOSLAS | No | Yes | -- | Yes |
In Memory Raster | MEM | Yes | Yes | -- | Yes |
Vexcel MFF | MFF | Yes | Yes | No limits | Yes |
Vexcel MFF2 | MFF2 (HKV) | Yes | Yes | No limits | Yes |
MG4 Encoded Lidar | MG4Lidar | No | Yes | -- | No, needs LIDAR SDK |
Multi-resolution Seamless Image Database | MrSID | No | Yes | -- | No, needs MrSID SDK |
Meteosat Second Generation | MSG | No | Yes | -- | No, needs msg library |
EUMETSAT Archive native (.nat) | MSGN | No | Yes | -- | Yes |
NLAPS Data Format | NDF | No | Yes | No limits | Yes |
NITF (.ntf, .nsf, .gn?, .hr?, .ja?, .jg?, .jn?, .lf?, .on?, .tl?, .tp?, etc.) | NITF | Yes | Yes | 10GB | Yes |
NetCDF | netCDF | Yes | Yes | 2GB | No, needs libnetcdf |
NTv2 Datum Grid Shift | NTv2 | Yes | Yes | -- | Yes |
Northwood/VerticalMapper Classified Grid Format .grc/.tab | NWT_GRC | No | Yes | -- | Yes |
Northwood/VerticalMapper Numeric Grid Format .grd/.tab | NWT_GRD | No | Yes | -- | Yes |
OGDI Bridge | OGDI | No | Yes | -- | No, needs OGDI library |
OZI OZF2/OZFX3 | OZI | No | Yes | -- | No |
PCI .aux Labelled | PAux | Yes | No | No limits | Yes |
PCI Geomatics Database File | PCIDSK | Yes | Yes | No limits | Yes |
PCRaster | PCRaster | Yes | Yes | -- | Yes (internal libcsf provided) |
Geospatial PDF | PDF | No | Yes | -- | No, needs libpoppler or libpodofo |
NASA Planetary Data System | PDS | No | Yes | -- | Yes |
Portable Network Graphics (.png) | PNG | Yes | No | -- | Yes (internal libpng provided) |
PostGIS Raster (previously WKTRaster) | PostGISRaster | No | Yes | -- | No, needs PostgreSQL library |
Netpbm (.ppm,.pgm) | PNM | Yes | No | No limits | Yes |
R Object Data Store | R | Yes | No | -- | Yes |
Rasdaman | RASDAMAN | No | No | No limits | No (needs raslib) |
Rasterlite - Rasters in SQLite DB | Rasterlite | Yes | Yes | -- | No (needs OGR SQLite driver) |
Swedish Grid RIK (.rik) | RIK | No | Yes | 4GB | Yes (internal zlib is used if necessary) |
Raster Matrix Format (*.rsw, .mtw) | RMF | Yes | Yes | 4GB | Yes |
Raster Product Format/RPF (CADRG, CIB) | RPFTOC | No | Yes | -- | Yes |
RadarSat2 XML (product.xml) | RS2 | No | Yes | 4GB | Yes |
Idrisi Raster | RST | Yes | Yes | No limits | Yes |
SAGA GIS Binary format | SAGA | Yes | Yes | -- | Yes |
SAR CEOS | SAR_CEOS | No | Yes | -- | Yes |
ArcSDE Raster | SDE | No | Yes | -- | No, needs ESRI SDE |
USGS SDTS DEM (*CATD.DDF) | SDTS | No | Yes | -- | Yes |
SGI Image Format | SGI | Yes | Yes | -- | Yes |
Snow Data Assimilation System | SNODAS | No | Yes | -- | Yes |
Standard Raster Product (ASRP/USRP) | SRP | No | Yes | 2GB | Yes |
SRTM HGT Format | SRTMHGT | Yes | Yes | -- | Yes |
Terragen Heightfield (.ter) | TERRAGEN | Yes | No | -- | Yes |
EarthWatch/DigitalGlobe .TIL | TIL | No | No | -- | Yes |
TerraSAR-X Product | TSX | Yes | No | -- | Yes |
USGS ASCII DEM / CDED (.dem) | USGSDEM | Yes | Yes | -- | Yes |
GDAL Virtual (.vrt) | VRT | Yes | Yes | -- | Yes |
OGC Web Coverage Service | WCS | No | Yes | -- | No, needs libcurl |
WEBP | WEBP | Yes | No | -- | No, needs libwebp |
OGC Web Map Service | WMS | No | Yes | -- | No, needs libcurl |
X11 Pixmap (.xpm) | XPM | Yes | No | -- | Yes |
ASCII Gridded XYZ | XYZ | Yes | Yes | -- | Yes |
ZMap Plus Grid | ZMap | Yes | Yes | -- | Yes |
1. Maximum file size is determined not only by the file format itself, but by operating system/file system capabilities as well.
2. ERDAS Imagine uses a different file format for large files, where 32-bit pointers cannot be used.
$Id: formats_list.html 22861 2011-08-04 20:13:27Z rouault $
AIRSAR format contains a polarimetric image in compressed Stokes matrix form. Internally GDAL decompresses the data into a Stokes matrix, and then converts that form into a covariance matrix. The returned six bands are the six values needed to define the 3x3 Hermitian covariance matrix. The convention used to represent the covariance matrix in terms of the scattering matrix elements HH, HV (=VH), and VV is indicated below. Note that the non-diagonal elements of the matrix are complex values, while the diagonal values are real (though represented as complex bands).
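As a non-authoritative sketch of such a convention, the standard monostatic covariance matrix in the lexicographic basis, expressed in terms of the scattering matrix elements, is shown below; the driver's exact scaling factors may differ from this textbook form:

```latex
C =
\begin{pmatrix}
\langle |S_{HH}|^2 \rangle & \sqrt{2}\,\langle S_{HH} S_{HV}^{*} \rangle & \langle S_{HH} S_{VV}^{*} \rangle \\
\sqrt{2}\,\langle S_{HV} S_{HH}^{*} \rangle & 2\,\langle |S_{HV}|^2 \rangle & \sqrt{2}\,\langle S_{HV} S_{VV}^{*} \rangle \\
\langle S_{VV} S_{HH}^{*} \rangle & \sqrt{2}\,\langle S_{VV} S_{HV}^{*} \rangle & \langle |S_{VV}|^2 \rangle
\end{pmatrix}
```

The matrix is Hermitian, so the three diagonal entries are real and the three independent off-diagonal entries are complex, which accounts for the six returned bands.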
The AIRSAR product format includes (potentially) several headers of information. This information is captured and represented as metadata on the file as a whole. Information items from the main header are prefixed with "MH_", items from the parameter header are prefixed with "PH_" and information from the calibration header are prefixed with "CH_". The metadata item names are derived automatically from the names of the fields within the header itself.
No effort is made to read files associated with the AIRSAR product such as mission_l.mocomp, mission_meta.airsar or mission_meta.podaac.
See Also:
BAG files have two or three image bands representing Elevation (band 1), Uncertainty (band 2) and Nominal Elevation (band 3) values for each cell in a raster grid area.
The geotransform and coordinate system is extracted from the internal XML metadata provided with the dataset. However, some products may have unsupported coordinate system formats.
The full XML metadata is available in the "xml:BAG" metadata domain.
Nodata, minimum and maximum values for each band are also provided.
BLX is the format used for storing topographic data in Magellan GPS units. This driver supports both reading and writing. In addition, the 4 overview levels inherent in the BLX format can be used by the driver.
The BLX format is tile-based; for the moment the tile size is fixed at 128x128. Furthermore, the raster dimensions must be a multiple of the tile size.
The data type is fixed to Int16, and the value for undefined cells is fixed to -32768. In the BLX format undefined values are only fully supported at the tile level. For undefined pixels in non-empty tiles, see the FILLUNDEF/FILLUNDEFVAL options.
The BLX projection is fixed to WGS84 and georeferencing from BLX is supported in the form of one tiepoint and pixelsize.
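The constraints above (128x128 tiles, dimensions a multiple of the tile size, Int16 samples, -32768 as the undefined value) can be sketched as a small validity check; the function names here are illustrative, not part of GDAL:

```python
# Hedged sketch of the BLX constraints described above.
TILE = 128        # fixed BLX tile size
NODATA = -32768   # fixed undefined-value marker for Int16 samples

def blx_dims_valid(xsize: int, ysize: int) -> bool:
    """Raster dimensions must be a non-zero multiple of the 128x128 tile size."""
    return xsize > 0 and ysize > 0 and xsize % TILE == 0 and ysize % TILE == 0

def blx_value_valid(v: int) -> bool:
    """Samples must fit in Int16; NODATA is reserved for undefined cells."""
    return -32768 <= v <= 32767

assert blx_dims_valid(512, 256)       # both multiples of 128
assert not blx_dims_valid(500, 256)   # 500 is not a multiple of 128
```

A raster failing these checks would have to be padded or resampled before translation to BLX.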
This driver supports reading almost any type of BMP file and can write files that should be readable on any Windows system. Only single-band or three-band files can be saved in BMP format. Input values will be resampled to 8 bits.
If an ESRI world file exists with the .bpw, .bmpw or .wld extension, it will be read and used to establish the geotransform for the image.
This driver provides the capability to read TerraSAR-X complex data. While most users will receive products in GeoTIFF format (representing detected radiation reflected from the targets, or geocoded data), ScanSAR products will be distributed in COSAR format.
Essentially, COSAR is an annotated binary matrix, with each sample held in 4 bytes (16 bits real, 16 bits imaginary) stored with the most significant byte first (Big Endian). Within a COSAR container there are one or more "bursts" which represent individual ScanSAR bursts. Note that if a Stripmap or Spotlight product is held in a COSAR container it is stored in a single burst.
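The sample layout described above (big-endian signed 16-bit real part followed by signed 16-bit imaginary part) can be sketched with the standard library; this is an illustrative decoder for one raw sample only and does not handle the burst annotation headers:

```python
# Hedged sketch: unpack one 4-byte COSAR sample into a Python complex number.
import struct

def unpack_cosar_sample(raw: bytes) -> complex:
    """raw: 4 bytes -> complex(real, imag); both big-endian signed 16-bit ints."""
    re, im = struct.unpack(">hh", raw)
    return complex(re, im)

# 0x0001 -> 1 (real), 0xffff -> -1 (imaginary)
sample = unpack_cosar_sample(b"\x00\x01\xff\xff")
```

The `">hh"` format string is what encodes "most significant byte first" for two consecutive 16-bit values.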
Support for ScanSAR data is still under way, owing to the difficulty of fitting the ScanSAR "burst" identifiers into the GDAL model.
See Also:

For instance, if the URL http://maps.gdal.org/daac-bin/nph-hdf/3B42.HDF.dds returns a DDS definition like this:
Dataset {
    Structure {
        Structure {
            Float64 percipitate[scan = 5][longitude = 360][latitude = 80];
            Float64 relError[scan = 5][longitude = 360][latitude = 80];
        } PlanetaryGrid;
    } DATA_GRANULE;
} 3B42.HDF;
then the percipitate grid can be accessed using the following GDAL dataset name:
http://maps.gdal.org/daac-bin/nph-hdf/3B42.HDF?DATA_GRANULE.PlanetaryGrid.percipitate[0][x][y]
The full path to the grid or array to be accessed needs to be specified (not counting the outer Dataset name). GDAL needs to know which indices of the array to treat as x (longitude or easting) and y (latitude or northing). Any other dimensions need to be restricted to a single value.
In cases of data servers with only 2D arrays and grids as immediate children of the Dataset it may not be necessary to name the grid or array variable.
In cases where there are a number of 2D arrays or grids at the dataset level, they may each be automatically treated as separate bands.
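The dataset-name construction described above is plain string assembly: the server URL, a "?", the full variable path, and one bracketed entry per dimension, where fixed dimensions get a single index and the x/y dimensions are marked literally. A hedged sketch (the helper name and `dims` convention are mine, not part of GDAL):

```python
# Hedged sketch: compose a GDAL DODS dataset name from its parts.
def dods_name(url: str, path: str, dims: list) -> str:
    """dims: one entry per array dimension -- a fixed index like "0",
    or the literal markers "x" / "y" for the spatial dimensions."""
    return url + "?" + path + "".join(f"[{d}]" for d in dims)

# Reproduces the example dataset name from the text above.
name = dods_name("http://maps.gdal.org/daac-bin/nph-hdf/3B42.HDF",
                 "DATA_GRANULE.PlanetaryGrid.percipitate",
                 ["0", "x", "y"])
```

Note how the `scan` dimension is pinned to a single value (`0`) while `longitude` and `latitude` are tagged as `x` and `y`, exactly as the rules above require.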
Attributes {
    GLOBAL {
        Float64 Northernmost_Northing 71.1722;
        Float64 Southernmost_Northing 4.8278;
        Float64 Easternmost_Easting -27.8897;
        Float64 Westernmost_Easting -112.11;
        Float64 GeoTransform "71.1722 0.001 0.0 -112.11 0.0 -0.001";
        String spatial_ref "GEOGCS[\"WGS84\",DATUM[\"WGS_1984\",SPHEROID[\"WGS84\",6378137,298.257223563]],PRIMEM[\"Greenwich\",0],UNIT[\"degree\",0.0174532925199433]]";
        Metadata {
            String TIFFTAG_XRESOLUTION "400";
            String TIFFTAG_YRESOLUTION "400";
            String TIFFTAG_RESOLUTIONUNIT "2 (pixels/inch)";
        }
    }
    band_1 {
        String Description "...";
    }
}
It will have the following subitems:
Colormap {
    Color_0 {
        Byte red 0;
        Byte green 0;
        Byte blue 0;
        Byte alpha 255;
    }
    Color_1 {
        Byte red 255;
        Byte green 255;
        Byte blue 255;
        Byte alpha 255;
    }
    ...
}
Support for the ECW driver in GDAL is optional, and requires linking in external ECW SDK libraries provided by ERDAS.
In addition to ECW files, this driver also supports access to network image services using the "ECWP" protocol. Use the full ecwp:// url of the service as the dataset name. When built with SDK 4.1 or newer it is also possible to take advantage of RFC 24 style asynchronous access to ECWP services.
Starting with GDAL 1.9.0, XMP metadata can be extracted from JPEG2000 files, and will be stored as XML raw content in the xml:XMP metadata domain.
For those still using the ECW 3.3 SDK, images less than 500MB may be compressed for free, while larger images require licensing from ERDAS. See the licensing agreement and the LARGE_OK option.
Files to be compressed into ECW format must also be at least 128x128. ECW currently only supports 8 bits per channel.
When writing coordinate system information to ECW files, many less common coordinate systems are not mapped properly. If you know the ECW name for the coordinate system you can force it to be set at creation time with the PROJ and DATUM creation options.
Creation Options:
ELAS is an old remote sensing system still used for a variety of research projects within NASA. The ELAS format support can be found in gdal/frmts/elas.
See Also:
Starting with GDAL 1.7.0, GDAL can read and write wavelet-compressed images through the Epsilon library. Starting with GDAL 1.9.0, epsilon 0.9.1 is required.
The driver relies on the open source EPSILON library (dual LGPL/GPL v3 licence). In its current state, the driver is only able to read images with regular internal tiling.
The EPSILON driver only supports 1-band (grayscale) and 3-band (RGB) images.
It is mainly intended to be used by the Rasterlite driver.
TARGET=n: Target size reduction as a percentage of the original (0-100). Defaults to 96.
FILTER=id: Wavelet filter ID. See the EPSILON documentation or 'gdalinfo --format EPSILON' for the full list of filter IDs. Defaults to 'daub97lift'.
BLOCKXSIZE=n: Sets tile width; power of 2 between 32 and 1024. Defaults to 256.
BLOCKYSIZE=n: Sets tile height; power of 2 between 32 and 1024. Defaults to 256.
MODE=[NORMAL/OTLPF]: OTLPF is a workaround to reduce boundary artefacts when the image is broken into several tiles. Due to mathematical constraints this method can be applied to biorthogonal filters only. Defaults to OTLPF.
RGB_RESAMPLE=[YES/NO]: Whether the RGB buffer must be resampled to 4:2:0. Defaults to YES.
See Also:
Datasets in FAST format are represented by several files: one or more administrative headers and one or more files with the actual image data in raw format. The administrative files contain various scene parameters, including the filenames of the images. Administrative headers are plain ASCII text and can be read with any text viewer/editor.
This driver takes the administrative file as input. The image filenames will be extracted from it and the data imported, with every file interpreted as a band.
From the "Level 1 Product Output Files Data Format Control Book":
The file naming convention for the FAST-L7A product files is
L7fppprrr_rrrYYYYMMDD_AAA.FST
where
L7 = Landsat 7 mission
f = ETM+ format (1 or 2) (data not pertaining to a specific format
defaults to 1)
ppp = starting path of the product
rrr_rrr = starting and ending rows of the product
YYYYMMDD = acquisition date of the image
AAA = file type:
HPN = panchromatic band header file
HRF = VNIR/ SWIR bands header file
HTM = thermal bands header file
B10 = band 1
B20 = band 2
B30 = band 3
B40 = band 4
B50 = band 5
B61 = band 6L
B62 = band 6H
B70 = band 7
B80 = band 8
FST = FAST file extension
So you should give the driver one of the L7fppprrr_rrrYYYYMMDD_HPN.FST, L7fppprrr_rrrYYYYMMDD_HRF.FST or L7fppprrr_rrrYYYYMMDD_HTM.FST files.
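The naming convention above decomposes mechanically into fixed-width fields. A hedged sketch of a parser for it (the pattern and field names are mine, for illustration only):

```python
# Hedged sketch: split a FAST-L7A file name of the form
# L7fppprrr_rrrYYYYMMDD_AAA.FST into its fields per the convention above.
import re

PATTERN = re.compile(
    r"L7(?P<fmt>\d)"            # f   = ETM+ format (1 or 2)
    r"(?P<path>\d{3})"          # ppp = starting path of the product
    r"(?P<rows>\d{3}_\d{3})"    # rrr_rrr = starting and ending rows
    r"(?P<date>\d{8})"          # YYYYMMDD = acquisition date
    r"_(?P<kind>\w{3})\.FST$")  # AAA = file type (HPN, HRF, HTM, B10, ...)

def parse_fast_name(name: str) -> dict:
    m = PATTERN.match(name)
    if not m:
        raise ValueError("not a FAST-L7A file name: " + name)
    return m.groupdict()

fields = parse_fast_name("L71123045_04520010512_HRF.FST")
```

For this example name, `fields["kind"]` is "HRF", i.e. the VNIR/SWIR bands header file that the driver should be given.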
<header>.<ext>
<header>.1.<ext>
<header>.2.<ext>
...
or
<header>.<ext>
band1.<ext>
band2.<ext>
...
or
<header>.<ext>
band1.dat
band2.dat
...
or
<header>.<ext>
imagery1.<ext>
imagery2.<ext>
...
or
<header>.<ext>
imagery1.dat
imagery2.dat
...
in lower or upper case. The header file can be named arbitrarily. This should cover the majority of distributors' naming schemes, but if you are out of luck and your dataset is named differently, you should rename the files manually before importing the data with GDAL.
GDAL also supports the logic for naming band files for datasets produced by Euromap GmbH for IRS-1C/IRS-1D PAN, LISS3 and WIFS sensors. Their filename logic is explained in the Euromap Naming Conventions document.
This driver supports reading and writing raster data in Oracle Spatial GeoRaster format (10g or later). The Oracle Spatial GeoRaster driver is optionally built as a GDAL plugin, but it requires Oracle client libraries.
When opening a GeoRaster, its name should be specified in the form:
georaster:<user>{,/}<pwd>{,@}[db],[schema.][table],[column],[where]
georaster:<user>{,/}<pwd>{,@}[db],<rdt>,<rid>
Where:
user = Oracle server user's name login
pwd = user password
db = Oracle server identification (database name)
schema = name of a schema
table = name of a GeoRaster table (table that contains GeoRaster
columns)
column = name of a column data type MDSYS.SDO_GEORASTER
where = a simple where clause to identify one or multiples
GeoRaster(s)
rdt = name of a raster data table
rid = numeric identification of one GeoRaster
Examples:
geor:scott,tiger,demodb,table,column,id=1
geor:scott,tiger,demodb,table,column,"id = 1"
"georaster:scott/tiger@demodb,table,column,gain>10"
"georaster:scott/tiger@demodb,table,column,city='Brasilia'"
georaster:scott,tiger,,rdt_10$,10
geor:scott/tiger,,rdt_10$,10
Note: do not use spaces around the field values or the commas.
Note: as in the last two examples, the database name field can be left empty (",,") and the TNSNAME will be used.
Note: if the query results in more than one GeoRaster, they will be treated as a GDAL list of subdatasets in the metadata (see below).
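The connection-string rules above (slash/at login form, comma-separated fields with no surrounding spaces, optional empty db field) can be sketched as simple string assembly; this helper is illustrative, not part of GDAL, and the names are taken from the examples:

```python
# Hedged sketch: assemble a GeoRaster dataset name in the user/pwd@db form.
def georaster_name(user, pwd, db, table, column, where=""):
    login = f"georaster:{user}/{pwd}@{db}" if db else f"georaster:{user}/{pwd}"
    parts = [login, table, column]
    if where:                 # the optional where clause is the last field
        parts.append(where)
    return ",".join(parts)    # no spaces around the commas

name = georaster_name("scott", "tiger", "demodb", "table", "column", "id=1")
```

Leaving `db` empty reproduces the TNSNAME fallback form shown in the notes above.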
By providing some basic information the GeoRaster driver is capable of listing the existing rasters stored on the server:
To list all the GeoRaster tables on the server that belong to that user name and database:
% gdalinfo georaster:scott/tiger@db1
Naming a table as well lists all the GeoRaster-type columns and GeoRaster objects stored in that table. Adding a where clause lists only the GeoRasters that match it.
% gdalinfo georaster:scott/tiger
Driver: GeoRaster/Oracle Spatial GeoRaster
Subdatasets:
SUBDATASET_1_NAME=georaster:scott,tiger,,LANDSAT
SUBDATASET_1_DESC=Table:LANDSAT
SUBDATASET_2_NAME=georaster:scott,tiger,,GDAL_IMPORT
SUBDATASET_2_DESC=Table:GDAL_IMPORT
gdal_translate -of georaster landsat_823.tif geor:scott/tiger@orcl,landsat,raster \
    -co DESCRIPTION="(ID NUMBER, NAME VARCHAR2(40), RASTER MDSYS.SDO_GEORASTER)" \
    -co INSERT="VALUES (1,'Scene 823',SDO_GEOR.INIT())"
gdal_translate -of georaster landsat_825.tif geor:scott/tiger@orcl,landsat,raster \
-co INSERT="ID, RASTER VALUES (2,SDO_GEOR.INIT())"
During the process of importing raster into a GeoRaster object it is possible to give the driver a simple SQL table definition and also a SQL insert/values clause to inform the driver about the table to be created and the values to be added to the newly created row. The following example does that:
% gdal_translate -of georaster Newpor.tif georaster:scott/tiger,,landsat,scene \
    -co "DESCRIPTION=(ID NUMBER, SITE VARCHAR2(45), SCENE MDSYS.SDO_GEORASTER)" \
    -co "INSERT=VALUES(1,'West fields',SDO_GEOR.INIT())" \
    -co "BLOCKXSIZE=512" -co "BLOCKYSIZE=512" -co "BLOCKBSIZE=3" \
    -co "INTERLEAVE=PIXEL" -co "COMPRESS=JPEG-B"
Note that the creation option DESCRIPTION requires the table name to be given, and the column name should match the description:
% gdal_translate -of georaster landsat_1.tif georaster:scott/tiger,,landsat,scene \
    -co "DESCRIPTION=(ID NUMBER, SITE VARCHAR2(45), SCENE MDSYS.SDO_GEORASTER)" \
    -co "INSERT=VALUES(1,'West fields',SDO_GEOR.INIT())"
If the table "landsat" already exists, the "DESCRIPTION" option is ignored. The driver can only update one GeoRaster column per run of gdal_translate. Oracle creates default names and values for the RDT and RID during initialization of the SDO_GEORASTER object, but users can also specify a name and value of their choice.
% gdal_translate -of georaster landsat_1.tif georaster:scott/tiger,,landsat,scene \
    -co "INSERT=VALUES(10,'Main building',SDO_GEOR.INIT('RDT', 10))"
If no information is given about where to store the raster, the driver will create (if it doesn't already exist) a default table named GDAL_IMPORT with just one GeoRaster column named RASTER, and a table GDAL_RDT as the RDT; the RID will be assigned automatically by the server. Example:
% gdal_translate -of georaster input.tif "geor:scott/tiger@dbdemo"
A GIF image with transparency will have that entry marked as having an alpha value of 0.0 (transparent). Also, the transparent value will be returned as the NoData value for the band.
If an ESRI world file exists with the .gfw, .gifw or .wld extension, it will be read and used to establish the geotransform for the image.
Starting with GDAL 1.9.0, XMP metadata can be extracted from the file, and will be stored as XML raw content in the xml:XMP metadata domain.
WORLDFILE=ON: Force the generation of an associated ESRI world file (.wld).
Interlaced (progressive) GIF files can be generated by supplying the INTERLACING=ON option on creation.
Starting with GDAL 1.7.0, GDAL's internal GIF support is implemented based on source from the giflib 4.1.6 library (written by Gershon Elber, Eric Raymond and Toshio Kuratomi), hence generating LZW-compressed GIFs.
The driver was written with the financial support of the DM Solutions Group, and CIET International.
See Also:
GRASS rasters can be selected in several ways.
e.g.
% gdalinfo /u/data/grassdb/proj_tm/PERMANENT/cellhd/proj_tm
e.g.
% gdalinfo /usr2/data/grassdb/imagery/raw/group/testmff/REF % gdalinfo /usr2/data/grassdb/imagery/raw/group/testmff
The GDAL GRIB driver is based on a modified version of the degrib application which is written primarily by Arthur Taylor of NOAA NWS NDFD (MDL). The degrib application (and the GDAL GRIB driver) are built on the g2clib grib decoding library written primarily by John Huddleston of NOAA NWS NCEP.
There are several encoding schemes for raster data in GRIB format. Most common ones should be supported including PNG encoding. JPEG2000 encoded GRIB files will generally be supported if GDAL is also built with JPEG2000 support via one of the GDAL JPEG2000 drivers. The JasPer library generally provides the best jpeg2000 support for the GRIB driver.
GRIB files may be represented in GDAL as having many bands, with some sets of bands representing a time sequence. GRIB bands are represented as Float64 (double precision floating point) regardless of the actual values. GRIB metadata is captured as per-band metadata and used to set band descriptions, similar to this:
Description = 100000[Pa] ISBL="Isobaric surface"
GRIB_UNIT=[gpm]
GRIB_COMMENT=Geopotential height [gpm]
GRIB_ELEMENT=HGT
GRIB_SHORT_NAME=100000-ISBL
GRIB_REF_TIME= 1201100400 sec UTC
GRIB_VALID_TIME= 1201104000 sec UTC
GRIB_FORECAST_SECONDS=3600 sec

GRIB2 files may also include an extract of the product definition template number (octets 8-9) and the product definition template values (octets 10+) as metadata, like this:
GRIB_PDS_PDTN=0 GRIB_PDS_TEMPLATE_NUMBERS=3 5 2 0 105 0 0 0 1 0 0 0 1 100 0 0 1 134 160 255 0 0 0 0 0
Most forms of TIFF and GeoTIFF files are supported by GDAL for reading, and somewhat fewer varieties can be written.
When built with internal libtiff or with libtiff >= 4.0, GDAL also supports reading and writing BigTIFF files (evolution of the TIFF format to support files larger than 4 GB).
Currently band types of Byte, UInt16, Int16, UInt32, Int32, Float32, Float64, CInt16, CInt32, CFloat32 and CFloat64 are supported for reading and writing. Paletted images will return palette information associated with the band. The compression formats listed below should be supported for reading as well.
In addition, one-bit files and some other unusual formulations of GeoTIFF, such as YCbCr color model files, are automatically translated into RGBA (red, green, blue, alpha) form and treated as four eight-bit bands.
Most GeoTIFF projections should be supported, with the caveat that in order to translate uncommon Projected and Geographic coordinate systems into OGC WKT it is necessary to have the EPSG .csv files available. They must be found at the location pointed to by the GEOTIFF_CSV environment variable.
Georeferencing from GeoTIFF is supported in the form of one tiepoint and pixel size, a transformation matrix, or a list of GCPs.
If no georeferencing information is available in the TIFF file itself, GDAL will also check for, and use, an ESRI world file with the extension .tfw, .tifw/.tiffw or .wld, as well as a MapInfo .tab file (only control points used, Coordsys ignored).
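A world file is simply six lines of text defining an affine transform: pixel x-size, row rotation, column rotation, pixel y-size (usually negative), then the x and y of the center of the upper-left pixel. A hedged sketch of turning one into a GDAL-style geotransform tuple (the half-pixel shift converts the world file's pixel-center origin to GDAL's pixel-corner origin; the helper name is mine):

```python
# Hedged sketch: read 6 world-file lines into a GDAL-style geotransform
# (origin_x, pixel_w, rot_b, origin_y, rot_d, pixel_h).
def worldfile_to_geotransform(lines):
    a, d, b, e, cx, cy = (float(v) for v in lines)
    # world file gives the CENTER of the upper-left pixel;
    # GDAL's geotransform origin is that pixel's upper-left CORNER.
    return (cx - a / 2 - b / 2, a, b,
            cy - d / 2 - e / 2, d, e)

# Values consistent with the utm.tif example (60 m pixels, origin 440720/3751320).
gt = worldfile_to_geotransform(
    ["60.0", "0.0", "0.0", "-60.0", "440750.0", "3751290.0"])
```

Here the center 440750 minus half a 60 m pixel recovers the corner origin 440720.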
GDAL can read and write the RPCCoefficientTag as described in the RPCs in GeoTIFF proposed extension. The tag is written only for files created with the default profile GDALGeoTIFF. For other profiles, a .RPB file is created. In GDAL data model, the RPC coefficients are stored into the RPC metadata domain. For more details, see the RPC Georeferencing RFC. If .RPB or _RPC.TXT files are found, they will be used to read the RPCs, even if the RPCCoefficientTag tag is set.
(from GDAL 1.6.0)
TIFF files can contain internal transparency masks. The GeoTIFF driver recognizes an internal directory as being a transparency mask when the FILETYPE_MASK bit value is set on the TIFFTAG_SUBFILETYPE tag. According to the TIFF specification, such internal transparency masks contain 1 sample of 1-bit data. Although the TIFF specification allows for higher resolutions for the internal transparency mask, the GeoTIFF driver only supports internal transparency masks of the same dimensions as the main image. Transparency masks of internal overviews are also supported.
When the GDAL_TIFF_INTERNAL_MASK configuration option is set to YES and the GeoTIFF file is opened in update mode, the CreateMaskBand() method on a TIFF dataset or rasterband will create an internal transparency mask. Otherwise, the default behaviour of nodata mask creation will be used, that is to say the creation of a .msk file, as per RFC 15
Starting with GDAL 1.8.0, 1-bit internal mask bands are deflate compressed. When reading them back, to make conversion between mask band and alpha band easier, mask bands are exposed to the user as being promoted to full 8 bits (i.e. the value for unmasked pixels is 255) unless the GDAL_TIFF_INTERNAL_MASK_TO_8BIT configuration option is set to NO. This does not affect the way the mask band is written (it is always 1-bit).
The GeoTIFF driver supports reading, creation and update of internal overviews. Internal overviews can be created on GeoTIFF files opened in update mode (with gdaladdo for instance). If the GeoTIFF file is opened as read only, the creation of overviews will be done in an external .ovr file. Overviews are only updated on request with the BuildOverviews() method.
(From GDAL 1.6.0) If a GeoTIFF file has a transparency mask and the GDAL_TIFF_INTERNAL_MASK environment variable is set to YES and the GeoTIFF file is opened in update mode, BuildOverviews() will automatically create overviews for the internal transparency mask. These overviews will be refreshed by further calls to BuildOverviews() even if GDAL_TIFF_INTERNAL_MASK is not set to YES.
(From GDAL 1.8.0) The block size (tile width and height) used for overviews (internal or external) can be specified by setting the GDAL_TIFF_OVR_BLOCKSIZE environment variable to a power-of-two value between 64 and 4096. The default value is 128.
GDAL can deal with the following baseline TIFF tags as dataset-level metadata :
The name of the metadata item to use is one of the above names ("TIFFTAG_DOCUMENTNAME", ...).
Other non-standard metadata items can be stored in a TIFF file created with the profile GDALGeoTIFF (the default, see below in the Creation issues section). Those metadata items are grouped together into an XML string stored in the non-standard TIFFTAG_GDAL_METADATA ASCII tag (code 42112). When the BASELINE or GeoTIFF profile is used, those non-standard metadata items are stored in a PAM .aux.xml file.
The value of GDALMD_AREA_OR_POINT ("AREA_OR_POINT") metadata item is stored in the GeoTIFF key RasterPixelIsPoint for GDALGeoTIFF or GeoTIFF profiles.
Starting with GDAL 1.9.0, XMP metadata can be extracted from the file, and will be stored as XML raw content in the xml:XMP metadata domain.
GDAL stores the band nodata value in the non-standard TIFFTAG_GDAL_NODATA ASCII tag (code 42113) for files created with the default profile GDALGeoTIFF. Note that all bands must use the same nodata value. When the BASELINE or GeoTIFF profile is used, the nodata value is stored in a PAM .aux.xml file.
GeoTIFF files can be created with any GDAL defined band type, including the complex types. Created files may have any number of bands. Files with exactly 3 bands will be given a photometric interpretation of RGB, files with exactly four bands will have a photometric interpretation of RGBA, while all other combinations will have a photometric interpretation of MIN_IS_WHITE. Files with pseudo-color tables, or GCPs can currently only be created when creating from an existing GDAL dataset with those objects (GDALDriver::CreateCopy()).
Note that the GeoTIFF format does not support parametric description of datums, so TOWGS84 parameters in coordinate systems are lost in GeoTIFF format.
TFW=YES: Force the generation of an associated ESRI world file (.tfw). See the World Files section for details.
INTERLEAVE=[BAND,PIXEL]: By default TIFF files with pixel interleaving (PLANARCONFIG_CONTIG in TIFF terminology) are created. These are slightly less efficient than BAND interleaving for some purposes, but some applications only support pixel interleaved TIFF files.
TILED=YES: By default stripped TIFF files are created. This option can be used to force creation of tiled TIFF files.
BLOCKXSIZE=n: Sets tile width, defaults to 256.
BLOCKYSIZE=n: Set tile or strip height. Tile height defaults to 256, strip height defaults to a value such that one strip is 8K or less.
NBITS=n: Create a file with less than 8 bits per sample by passing a value from 1 to 7. The apparent pixel type should be Byte. From GDAL 1.6.0, values of n=9...15 (UInt16 type) and n=17...31 (UInt32 type) are also accepted.
COMPRESS=[JPEG/LZW/PACKBITS/DEFLATE/CCITTRLE/CCITTFAX3/CCITTFAX4/NONE]: Set the compression to use. JPEG should generally only be used with Byte data (8 bits per channel). But starting with GDAL 1.7.0, and provided that GDAL is built with internal libtiff and libjpeg, it is possible to read and write 12-bit JPEG-compressed TIFF files (seen as UInt16 bands with NBITS=12). See the "8 and 12 bit JPEG in TIFF" wiki page for more details. The CCITT compressions should only be used with 1-bit (NBITS=1) data. NONE is the default.
PREDICTOR=[1/2/3]: Set the predictor for LZW or DEFLATE compression. The default is 1 (no predictor), 2 is horizontal differencing and 3 is floating point prediction.
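Horizontal differencing (PREDICTOR=2) replaces each sample in a row by its difference from the previous sample before compression, which makes smoothly varying data far more compressible. A hedged sketch of the idea for 8-bit samples (modular arithmetic keeps differences in range, per the TIFF predictor definition; function names are illustrative):

```python
# Hedged sketch of PREDICTOR=2 (horizontal differencing) for 8-bit samples.
def hdiff_encode(row):
    """First sample kept as-is; each later sample stored as delta mod 256."""
    return [row[0]] + [(row[i] - row[i - 1]) % 256 for i in range(1, len(row))]

def hdiff_decode(enc):
    """Invert the differencing by a running modular sum."""
    out = [enc[0]]
    for d in enc[1:]:
        out.append((out[-1] + d) % 256)
    return out

row = [10, 12, 13, 13, 200]
assert hdiff_decode(hdiff_encode(row)) == row  # lossless round trip
```

On a smooth gradient the encoded deltas are mostly small values, which LZW and DEFLATE compress much better than the raw samples.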
SPARSE_OK=TRUE/FALSE (From GDAL 1.6.0): Should newly created files be allowed to be sparse? Sparse files have 0 tile/strip offsets for blocks never written and save space; however, most non-GDAL packages cannot read such files. The default is FALSE.
JPEG_QUALITY=[1-100]: Set the JPEG quality when using JPEG compression. A value of 100 is best quality (least compression), and 1 is worst quality (best compression). The default is 75.
ZLEVEL=[1-9]: Set the level of compression when using DEFLATE compression. A value of 9 is best, and 1 is least compression. The default is 6.
PHOTOMETRIC=[MINISBLACK/MINISWHITE/RGB/CMYK/YCBCR/CIELAB/ICCLAB/ITULAB] : Set the photometric interpretation tag. Default is MINISBLACK, but if the input image has 3 or 4 bands of Byte type, then RGB will be selected. You can override default photometric using this option.
ALPHA=YES: The first "extrasample" is marked as being alpha if there are any extra samples. This is necessary if you want to produce a greyscale TIFF file with an alpha band (for instance).
PROFILE=[GDALGeoTIFF/GeoTIFF/BASELINE]: Control what non-baseline tags are emitted by GDAL.
BIGTIFF=YES/NO/IF_NEEDED/IF_SAFER: Control whether the created file is a BigTIFF or a classic TIFF.
BigTIFF is a TIFF variant which can contain more than 4GiB of data (size of classic TIFF is limited by that value). This option is available if GDAL is built with libtiff library version 4.0 or higher (which is the case of the internal libtiff version from GDAL >= 1.5.0). The default is IF_NEEDED. (IF_NEEDED and IF_SAFER are available from GDAL 1.6.0).
When creating a new GeoTIFF with no compression, GDAL computes the size of the resulting file in advance. If that computed size is over 4GiB, GDAL automatically creates a BigTIFF file. However, when compression is used, the final file size cannot be known in advance, so a classic TIFF will be chosen. In that case, the user must explicitly request the creation of a BigTIFF with BIGTIFF=YES if the final file is expected to be too big for the classic TIFF format. If BigTIFF creation is not explicitly requested or guessed and the resulting file is too big for classic TIFF, libtiff will fail with an error message like "TIFFAppendToStrip:Maximum TIFF file size exceeded".
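The uncompressed-size check described above amounts to comparing the raw raster size against the 4GiB classic TIFF limit. A hedged back-of-the-envelope sketch (ignores tag and directory overhead, so it is a lower bound on the real file size; the helper name is mine):

```python
# Hedged sketch: does an uncompressed raster exceed the classic TIFF 4GiB limit?
CLASSIC_TIFF_LIMIT = 4 * 1024 ** 3  # 4 GiB

def needs_bigtiff(xsize, ysize, bands, bytes_per_sample):
    """Lower-bound estimate: raw sample data only, no tag/directory overhead."""
    return xsize * ysize * bands * bytes_per_sample > CLASSIC_TIFF_LIMIT

assert not needs_bigtiff(10000, 10000, 3, 1)  # ~300 MB RGB Byte raster
assert needs_bigtiff(50000, 50000, 3, 1)      # ~7.5 GB, requires BigTIFF
```

With compression enabled this estimate no longer bounds the output size, which is exactly why BIGTIFF=YES must then be requested explicitly.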
PIXELTYPE=[DEFAULT/SIGNEDBYTE]: By setting this to SIGNEDBYTE, a new Byte file can be forced to be written as signed byte.
COPY_SRC_OVERVIEWS=[YES/NO]: (GDAL >= 1.8.0, CreateCopy() only) By setting this to YES (default is NO), the potential existing overviews of the source dataset will be copied to the target dataset without being recomputed. If overviews of mask band also exist, provided that the GDAL_TIFF_INTERNAL_MASK configuration option is set to YES, they will also be copied. Note that this creation option will have no effect if general options (i.e. options which are not creation options) of gdal_translate are used.
When translating a RGB image to JPEG-In-TIFF, using PHOTOMETRIC=YCBCR can make the size of the image typically 2 to 3 times smaller than the default photometric value (RGB). When using PHOTOMETRIC=YCBCR, the INTERLEAVE option must be kept to its default value (PIXEL), otherwise libtiff will fail to compress the data.
Note also that the dimensions of the tiles or strips must be a multiple of 8 for PHOTOMETRIC=RGB or 16 for PHOTOMETRIC=YCBCR.
This paragraph lists the configuration options that can be set to alter the default behaviour of the GTiff driver.
See Also:
If the file contains only one SDS that appears to be an image, it may be accessed directly, but if it contains multiple images it may be necessary to import the file via a two-step process. The first step is to get a report of the component images (SDS arrays) in the file using gdalinfo, and the second is to import the desired images using gdal_translate. The gdalinfo utility lists all multidimensional subdatasets from the input HDF file. The names of the individual images (subdatasets) are assigned to the SUBDATASET_n_NAME metadata items, and the description for each image is found in the corresponding SUBDATASET_n_DESC item. For HDF4 images the subdataset names will be formatted like this:
HDF4_SDS:subdataset_type:file_name:subdataset_index
where subdataset_type shows predefined names for some of the well known HDF datasets, file_name is the name of the input file, and subdataset_index is the index of the image to use (for internal use in GDAL).
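For instance, the components of such a name can be pulled apart with standard shell tools (the example name below is taken from the gdalinfo output later in this section):

```shell
name='HDF4_SDS:MODIS_L1B:GSUB1.A2001124.0855.003.200219309451.hdf:2'
sds_type=$(echo "$name" | cut -d: -f2)    # MODIS_L1B
file_name=$(echo "$name" | cut -d: -f3)   # GSUB1.A2001124.0855.003.200219309451.hdf
sds_index=$(echo "$name" | cut -d: -f4)   # 2
echo "$sds_type $sds_index"
```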
In the second step, pass this name to gdalinfo or gdal_translate to actually read the data.
For example, suppose we want to read data from a MODIS Level 1B dataset:
$ gdalinfo GSUB1.A2001124.0855.003.200219309451.hdf Driver: HDF4/Hierarchical Data Format Release 4 Size is 512, 512 Coordinate System is `' Metadata: HDFEOSVersion=HDFEOS_V2.7 Number of Scans=204 Number of Day mode scans=204 Number of Night mode scans=0 Incomplete Scans=0...a lot of metadata output skipped...
Subdatasets: SUBDATASET_1_NAME=HDF4_SDS:MODIS_L1B:GSUB1.A2001124.0855.003.200219309451.hdf:0 SUBDATASET_1_DESC=[408x271] Latitude (32-bit floating-point) SUBDATASET_2_NAME=HDF4_SDS:MODIS_L1B:GSUB1.A2001124.0855.003.200219309451.hdf:1 SUBDATASET_2_DESC=[408x271] Longitude (32-bit floating-point) SUBDATASET_3_NAME=HDF4_SDS:MODIS_L1B:GSUB1.A2001124.0855.003.200219309451.hdf:2 SUBDATASET_3_DESC=[12x2040x1354] EV_1KM_RefSB (16-bit unsigned integer) SUBDATASET_4_NAME=HDF4_SDS:MODIS_L1B:GSUB1.A2001124.0855.003.200219309451.hdf:3 SUBDATASET_4_DESC=[12x2040x1354] EV_1KM_RefSB_Uncert_Indexes (8-bit unsigned integer) SUBDATASET_5_NAME=HDF4_SDS:MODIS_L1B:GSUB1.A2001124.0855.003.200219309451.hdf:4 SUBDATASET_5_DESC=[408x271] Height (16-bit integer) SUBDATASET_6_NAME=HDF4_SDS:MODIS_L1B:GSUB1.A2001124.0855.003.200219309451.hdf:5 SUBDATASET_6_DESC=[408x271] SensorZenith (16-bit integer) SUBDATASET_7_NAME=HDF4_SDS:MODIS_L1B:GSUB1.A2001124.0855.003.200219309451.hdf:6 SUBDATASET_7_DESC=[408x271] SensorAzimuth (16-bit integer) SUBDATASET_8_NAME=HDF4_SDS:MODIS_L1B:GSUB1.A2001124.0855.003.200219309451.hdf:7 SUBDATASET_8_DESC=[408x271] Range (16-bit unsigned integer) SUBDATASET_9_NAME=HDF4_SDS:MODIS_L1B:GSUB1.A2001124.0855.003.200219309451.hdf:8 SUBDATASET_9_DESC=[408x271] SolarZenith (16-bit integer) SUBDATASET_10_NAME=HDF4_SDS:MODIS_L1B:GSUB1.A2001124.0855.003.200219309451.hdf:9 SUBDATASET_10_DESC=[408x271] SolarAzimuth (16-bit integer) SUBDATASET_11_NAME=HDF4_SDS:MODIS_L1B:GSUB1.A2001124.0855.003.200219309451.hdf:10 SUBDATASET_11_DESC=[408x271] gflags (8-bit unsigned integer) SUBDATASET_12_NAME=HDF4_SDS:MODIS_L1B:GSUB1.A2001124.0855.003.200219309451.hdf:12 SUBDATASET_12_DESC=[16x10] Noise in Thermal Detectors (8-bit unsigned integer) SUBDATASET_13_NAME=HDF4_SDS:MODIS_L1B:GSUB1.A2001124.0855.003.200219309451.hdf:13 SUBDATASET_13_DESC=[16x10] Change in relative responses of thermal detectors (8-bit unsigned integer) 
SUBDATASET_14_NAME=HDF4_SDS:MODIS_L1B:GSUB1.A2001124.0855.003.200219309451.hdf:14 SUBDATASET_14_DESC=[204x16x10] DC Restore Change for Thermal Bands (8-bit integer) SUBDATASET_15_NAME=HDF4_SDS:MODIS_L1B:GSUB1.A2001124.0855.003.200219309451.hdf:15 SUBDATASET_15_DESC=[204x2x40] DC Restore Change for Reflective 250m Bands (8-bit integer) SUBDATASET_16_NAME=HDF4_SDS:MODIS_L1B:GSUB1.A2001124.0855.003.200219309451.hdf:16 SUBDATASET_16_DESC=[204x5x20] DC Restore Change for Reflective 500m Bands (8-bit integer) SUBDATASET_17_NAME=HDF4_SDS:MODIS_L1B:GSUB1.A2001124.0855.003.200219309451.hdf:17 SUBDATASET_17_DESC=[204x15x10] DC Restore Change for Reflective 1km Bands (8-bit integer) Corner Coordinates: Upper Left ( 0.0, 0.0) Lower Left ( 0.0, 512.0) Upper Right ( 512.0, 0.0) Lower Right ( 512.0, 512.0) Center ( 256.0, 256.0)
Now select one of the subdatasets, described as [12x2040x1354] EV_1KM_RefSB (16-bit unsigned integer):
$ gdalinfo HDF4_SDS:MODIS_L1B:GSUB1.A2001124.0855.003.200219309451.hdf:2 Driver: HDF4Image/HDF4 Internal Dataset Size is 1354, 2040 Coordinate System is `' Metadata: long_name=Earth View 1KM Reflective Solar Bands Scaled Integers...metadata skipped...
Corner Coordinates: Upper Left ( 0.0, 0.0) Lower Left ( 0.0, 2040.0) Upper Right ( 1354.0, 0.0) Lower Right ( 1354.0, 2040.0) Center ( 677.0, 1020.0) Band 1 Block=1354x2040 Type=UInt16, ColorInterp=Undefined Band 2 Block=1354x2040 Type=UInt16, ColorInterp=Undefined Band 3 Block=1354x2040 Type=UInt16, ColorInterp=Undefined Band 4 Block=1354x2040 Type=UInt16, ColorInterp=Undefined Band 5 Block=1354x2040 Type=UInt16, ColorInterp=Undefined Band 6 Block=1354x2040 Type=UInt16, ColorInterp=Undefined Band 7 Block=1354x2040 Type=UInt16, ColorInterp=Undefined Band 8 Block=1354x2040 Type=UInt16, ColorInterp=Undefined Band 9 Block=1354x2040 Type=UInt16, ColorInterp=Undefined Band 10 Block=1354x2040 Type=UInt16, ColorInterp=Undefined Band 11 Block=1354x2040 Type=UInt16, ColorInterp=Undefined Band 12 Block=1354x2040 Type=UInt16, ColorInterp=Undefined
Or you may use gdal_translate to read image bands from this dataset.
Note that you should provide exactly the contents of the line marked SUBDATASET_n_NAME to GDAL, including the HDF4_SDS: prefix.
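A hedged sketch of that second step (the output name is hypothetical, and the command is guarded so it only runs where GDAL and the input file are present):

```shell
# Full subdataset name, including the HDF4_SDS: prefix
sds='HDF4_SDS:MODIS_L1B:GSUB1.A2001124.0855.003.200219309451.hdf:2'
if command -v gdal_translate >/dev/null 2>&1 \
   && [ -f GSUB1.A2001124.0855.003.200219309451.hdf ]; then
    gdal_translate "$sds" ev_1km_refsb.tif
fi
```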
This driver is intended only for importing remote sensing and geospatial datasets in the form of raster images. If you want to explore all the data contained in an HDF file, you should use other tools (you can find information about various HDF tools using the links at the end of this page).
By default the HDF4 driver only reads the GCPs from every 10th row and column of EOS_SWATH datasets. You can change this behaviour by setting the GEOL_AS_GCPS environment variable to PARTIAL (default), NONE, or FULL.
"Signature=Created with GDAL (http://www.remotesensing.org/gdal/)"
and are automatically recognised when read, so the projection information and transformation matrix are restored on read.
Creation Options:
Please note that the NCSA HDF library is compiled with several default limits defined in the hlimits.h file. For example, hlimits.h defines the maximum number of opened files:
#define MAX_FILE 32
If you need to open more HDF4 files simultaneously, you should change this value and rebuild the HDF4 library (and relink GDAL if using static HDF libraries).
The first step is to get a report of the component images (arrays) in the file using gdalinfo, and then to import the desired images using gdal_translate. The gdalinfo utility lists all multidimensional subdatasets from the input HDF file. The names of the individual images (subdatasets) are assigned to the SUBDATASET_n_NAME metadata item. The description for each image is found in the SUBDATASET_n_DESC metadata item. For HDF5 images the subdataset names will be formatted like this:
HDF5:file_name:subdataset
where:
file_name is the name of the input file, and
subdataset is the dataset name of the array to use (for internal use in GDAL).
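Because the file name is wrapped in double quotes inside the subdataset name, it is easiest to compose the string in the shell before handing it to gdalinfo or gdal_translate (the names below are taken from the example output in this section):

```shell
f='OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5'
sds='//HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/CloudFraction'
name="HDF5:\"$f\":$sds"   # HDF5:"<file_name>":<subdataset>
echo "$name"
```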
In the second step, pass this name to gdalinfo or gdal_translate to actually read the data.
For example, suppose we want to read data from the OMI/Aura Ozone (O3) dataset:
$ gdalinfo OMI-Aura_L2-OMTO3_2005m0326t2307-o03709_v002-2005m0428t201311.he5 Driver: HDF5/Hierarchical Data Format Release 5 Size is 512, 512 Coordinate System is `' Subdatasets: SUBDATASET_1_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/APrioriLayerO3 SUBDATASET_1_DESC=[1496x60x11] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/APrioriLayerO3 (32-bit floating-point) SUBDATASET_2_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/AlgorithmFlags SUBDATASET_2_DESC=[1496x60] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/AlgorithmFlags (8-bit unsigned character) SUBDATASET_3_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/CloudFraction SUBDATASET_3_DESC=[1496x60] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/CloudFraction (32-bit floating-point) SUBDATASET_4_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/CloudTopPressure SUBDATASET_4_DESC=[1496x60] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/CloudTopPressure (32-bit floating-point) SUBDATASET_5_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/ColumnAmountO3 SUBDATASET_5_DESC=[1496x60] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/ColumnAmountO3 (32-bit floating-point) SUBDATASET_6_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/LayerEfficiency SUBDATASET_6_DESC=[1496x60x11] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/LayerEfficiency (32-bit floating-point) SUBDATASET_7_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/NValue 
SUBDATASET_7_DESC=[1496x60x12] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/NValue (32-bit floating-point) SUBDATASET_8_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/O3BelowCloud SUBDATASET_8_DESC=[1496x60] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/O3BelowCloud (32-bit floating-point) SUBDATASET_9_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/QualityFlags SUBDATASET_9_DESC=[1496x60] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/QualityFlags (16-bit unsigned integer) SUBDATASET_10_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/Reflectivity331 SUBDATASET_10_DESC=[1496x60] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/Reflectivity331 (32-bit floating-point) SUBDATASET_11_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/Reflectivity360 SUBDATASET_11_DESC=[1496x60] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/Reflectivity360 (32-bit floating-point) SUBDATASET_12_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/Residual SUBDATASET_12_DESC=[1496x60x12] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/Residual (32-bit floating-point) SUBDATASET_13_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/ResidualStep1 SUBDATASET_13_DESC=[1496x60x12] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/ResidualStep1 (32-bit floating-point) SUBDATASET_14_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/ResidualStep2 SUBDATASET_14_DESC=[1496x60x12] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/ResidualStep2 (32-bit 
floating-point) SUBDATASET_15_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/SO2index SUBDATASET_15_DESC=[1496x60] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/SO2index (32-bit floating-point) SUBDATASET_16_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/Sensitivity SUBDATASET_16_DESC=[1496x60x12] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/Sensitivity (32-bit floating-point) SUBDATASET_17_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/StepOneO3 SUBDATASET_17_DESC=[1496x60] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/StepOneO3 (32-bit floating-point) SUBDATASET_18_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/StepTwoO3 SUBDATASET_18_DESC=[1496x60] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/StepTwoO3 (32-bit floating-point) SUBDATASET_19_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/TerrainPressure SUBDATASET_19_DESC=[1496x60] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/TerrainPressure (32-bit floating-point) SUBDATASET_20_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/UVAerosolIndex SUBDATASET_20_DESC=[1496x60] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/UVAerosolIndex (32-bit floating-point) SUBDATASET_21_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/dN_dR SUBDATASET_21_DESC=[1496x60x12] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/dN_dR (32-bit floating-point) SUBDATASET_22_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": 
//HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/dN_dT SUBDATASET_22_DESC=[1496x60x12] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Data_Fields/dN_dT (32-bit floating-point) SUBDATASET_23_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Geolocation_Fields/GroundPixelQualityFlags SUBDATASET_23_DESC=[1496x60] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Geolocation_Fields/GroundPixelQualityFlags (16-bit unsigned integer) SUBDATASET_24_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Geolocation_Fields/Latitude SUBDATASET_24_DESC=[1496x60] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Geolocation_Fields/Latitude (32-bit floating-point) SUBDATASET_25_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Geolocation_Fields/Longitude SUBDATASET_25_DESC=[1496x60] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Geolocation_Fields/Longitude (32-bit floating-point) SUBDATASET_26_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Geolocation_Fields/RelativeAzimuthAngle SUBDATASET_26_DESC=[1496x60] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Geolocation_Fields/RelativeAzimuthAngle (32-bit floating-point) SUBDATASET_27_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Geolocation_Fields/SolarAzimuthAngle SUBDATASET_27_DESC=[1496x60] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Geolocation_Fields/SolarAzimuthAngle (32-bit floating-point) SUBDATASET_28_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Geolocation_Fields/SolarZenithAngle SUBDATASET_28_DESC=[1496x60] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Geolocation_Fields/SolarZenithAngle (32-bit floating-point) 
SUBDATASET_29_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Geolocation_Fields/TerrainHeight SUBDATASET_29_DESC=[1496x60] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Geolocation_Fields/TerrainHeight (16-bit integer) SUBDATASET_30_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Geolocation_Fields/ViewingAzimuthAngle SUBDATASET_30_DESC=[1496x60] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Geolocation_Fields/ViewingAzimuthAngle (32-bit floating-point) SUBDATASET_31_NAME=HDF5:"OMI-Aura_L2-OMTO3_2005m0113t0224-o02648_v002-2005m0625t035355.he5": //HDFEOS/SWATHS/OMI_Column_Amount_O3/Geolocation_Fields/ViewingZenithAngle SUBDATASET_31_DESC=[1496x60] //HDFEOS/SWATHS/OMI_Column_Amount_O3/Geolocation_Fields/ViewingZenithAngle (32-bit floating-point) Corner Coordinates: Upper Left ( 0.0, 0.0) Lower Left ( 0.0, 512.0) Upper Right ( 512.0, 0.0) Lower Right ( 512.0, 512.0) Center ( 256.0, 256.0)
Now select one of the subdatasets, described as [1645x60] CloudFraction (32-bit floating-point):
$ gdalinfo HDF5:"OMI-Aura_L2-OMTO3_2005m0326t2307-o03709_v002-2005m0428t201311.he5":CloudFraction Driver: HDF5Image/HDF5 Dataset Size is 60, 1645 Coordinate System is: GEOGCS["WGS 84", DATUM["WGS_1984", SPHEROID["WGS 84",6378137,298.257223563, AUTHORITY["EPSG","7030"]], TOWGS84[0,0,0,0,0,0,0], AUTHORITY["EPSG","6326"]], PRIMEM["Greenwich",0, AUTHORITY["EPSG","8901"]], UNIT["degree",0.0174532925199433, AUTHORITY["EPSG","9108"]], AXIS["Lat",NORTH], AXIS["Long",EAST], AUTHORITY["EPSG","4326"]] GCP Projection = GEOGCS["WGS 84",DATUM["WGS_1984",SPHEROID["WGS 84",6378137,298.257223563,AUTHORITY["EPSG","7030"]],TOWGS84[0,0,0,0,0,0,0], AUTHORITY["EPSG","6326"]],PRIMEM["Greenwich",0,AUTHORITY["EPSG","8901"]], UNIT["degree",0.0174532925199433, AUTHORITY["EPSG","9108"]],AXIS["Lat",NORTH],AXIS["Long",EAST], AUTHORITY["EPSG","4326"]] GCP[ 0]: Id=, Info= (0.5,0.5) -> (261.575,-84.3495,0) GCP[ 1]: Id=, Info= (2.5,0.5) -> (240.826,-85.9928,0) GCP[ 2]: Id=, Info= (4.5,0.5) -> (216.754,-86.5932,0) GCP[ 3]: Id=, Info= (6.5,0.5) -> (195.5,-86.5541,0) GCP[ 4]: Id=, Info= (8.5,0.5) -> (180.265,-86.2009,0) GCP[ 5]: Id=, Info= (10.5,0.5) -> (170.011,-85.7315,0) GCP[ 6]: Id=, Info= (12.5,0.5) -> (162.987,-85.2337,0)
... 3000 GCPs are read from the file if Latitude and Longitude arrays are present.
Corner Coordinates: Upper Left ( 0.0, 0.0) Lower Left ( 0.0, 1645.0) Upper Right ( 60.0, 0.0) Lower Right ( 60.0, 1645.0) Center ( 30.0, 822.5) Band 1 Block=60x1 Type=Float32, ColorInterp=Undefined
You may use gdal_translate to read image bands from this dataset.
Note that you should provide exactly the contents of the line marked SUBDATASET_n_NAME to GDAL, including the HDF5: prefix.
This driver is intended only for importing remote sensing and geospatial datasets in the form of raster images (2D or 3D arrays). If you want to explore all the data contained in an HDF file, you should use other tools (you can find information about various HDF tools using the links at the end of this page).
Documentation for the individual products supported by this driver:
GDAL supports reading and writing HF2/HFZ/HF2.GZ heightfield raster datasets.
HF2 is a heightfield format that stores the differences between consecutive cell values. HF2 files can also optionally be compressed with the gzip algorithm, so HF2.GZ files (or HFZ, equivalently) may be significantly smaller than the uncompressed data. The format gives the user control over the desired accuracy through the vertical precision parameter.
GDAL can read and write georeferencing information through extended header blocks.
Compressed and missing tiles in Erdas files should be handled properly on read. Files between 2GiB and 4GiB in size should work on Windows NT, and may work on some Unix platforms. Files with external spill files (needed for datasets larger than 2GiB) are also supported for reading and writing.
Metadata reading and writing is supported at the dataset level, and for bands, but this is GDAL-specific metadata - not metadata in an Imagine-recognised form. The metadata is stored in a table called GDAL_MetaData with each column being a metadata item. The title is the key and the row 1 value is the value.
Creation Options:
Layer names can be set and retrieved with the GDALSetDescription/GDALGetDescription calls on the Raster Band objects.
See Also:
This format is basically a raw one. There is just one band per file, except for the RGB24 data type, where the red, green and blue bands are stored interleaved by pixel in the order blue, green, red. The other data types are unsigned 8-bit integers with values from 0 to 255, signed 16-bit integers with values from -32,768 to 32,767, or 32-bit single-precision floating point. The description of the file is stored in an accompanying text file with the extension RDC.
The RDC image description file doesn't include a color table or detailed geographic referencing information. The color table, if present, can be obtained from another accompanying file using the same base name as the RST file and the extension SMP.
For geographic referencing identification, the RDC file contains information that points to a file that holds the geographic reference details. Those files use the extension REF and reside in the same folder as the RST image, or more likely in the Idrisi installation folders.
Therefore, the presence or absence of the Idrisi software on the running system determines how this driver works. Setting the environment variable IDRISIDIR to point to the main Idrisi installation folder enables GDAL to find more detailed information about the geographic reference and projection in the REF files.
Note that the RST driver recognizes the naming convention used in Idrisi for UTM and State Plane geographic references, so it doesn't need to access the REF files. That is the case for RDC files that specify "utm-30n" or "spc87ma1" in the "ref. system" field. Note that exporting to RST in any other geographic reference system will generate suggested REF content in the comment section of the RDC file.
The Intergraph Raster File Format was the native file format used by Intergraph software applications to store raster data. It is manifested in several internal data formats.
.cot - 8-bit grayscale or color table data
.ctc - 8-bit grayscale using PackBits-type compression (uncommon)
.rgb - 24-bit color and grayscale (uncompressed and PackBits-type compression)
.ctb - 8-bit color table data (uncompressed or run-length encoded)
.grd - 8, 16 and 32 bit elevation data
.crl - 8 or 16 bit, run-length compressed grayscale or color table data
.tpe - 8 or 16 bit, run-length compressed grayscale or color table data
.lsr - 8 or 16 bit, run-length compressed grayscale or color table data
.rle - 1-bit run-length compressed data (16-bit runs)
.cit - CCITT G3 or G4 1-bit data
.g3 - CCITT G3 1-bit data
.g4 - CCITT G4 1-bit data
.tg4 - CCITT G4 1-bit data (tiled)
.cmp - JPEG grayscale, RGB, or CMYK
.jpg - JPEG grayscale, RGB, or CMYK
ISIS2 files often have the extension .cub, sometimes with an associated .lbl (label) file. When a .lbl file exists it should be used as the dataset name rather than the .cub file.
In addition to support for most ISIS2 imagery configurations, this driver also reads georeferencing and coordinate system information as well as selected other header metadata.
Implementation of this driver was supported by the United States Geological Survey.
ISIS2 is part of a family of related formats including PDS and ISIS3.
LABELING_METHOD=ATTACHED/DETACHED: Determines whether the header labelling should be in the same file as the imagery (the default - ATTACHED) or in a separate file (DETACHED).
IMAGE_EXTENSION=extension: Set the extension used for detached image files, defaults to "cub". Only used if LABELING_METHOD=DETACHED.
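A hedged sketch combining these two options (the input and output names are hypothetical, and the command is guarded so it only runs where GDAL is installed):

```shell
out=out.lbl
if command -v gdal_translate >/dev/null 2>&1 && [ -f input.tif ]; then
    # Write the label to out.lbl and the pixels to a detached out.img file
    gdal_translate -of ISIS2 -co LABELING_METHOD=DETACHED \
        -co IMAGE_EXTENSION=img input.tif "$out"
fi
echo "label: $out, image: ${out%.lbl}.img"
```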
ISIS3 files often have the extension .cub, sometimes with an associated .lbl (label) file. When a .lbl file exists it should be used as the dataset name rather than the .cub file.
In addition to support for most ISIS3 imagery configurations, this driver also reads georeferencing and coordinate system information as well as selected other header metadata.
Implementation of this driver was supported by the United States Geological Survey.
ISIS3 is part of a family of related formats including PDS and ISIS2.
Coordinate system and georeferencing transformations are read, and some degree of support is included for GeoJP2 (tm) (GeoTIFF-in-JPEG2000), ERDAS GML-in-JPEG2000, and the new GML-in-JPEG2000 specification developed at OGC.
Support for the JP2ECW driver in GDAL is optional, and requires linking in external ECW SDK libraries provided by ERDAS.
For those still using the ECW 3.3 SDK, images less than 500MB may be compressed for free, while larger images require licensing from ERDAS. See the licensing agreement and the LARGE_OK option.
Creation Options:
The JPEG2000 file format supports lossy and lossless compression of 8-bit and 16-bit images with one or more bands (components). Via the GeoJP2 (tm) mechanism, GeoTIFF style coordinate system and georeferencing information can be embedded within the JP2 file. JPEG2000 files use a substantially different format and compression mechanism than the traditional JPEG compression and JPEG JFIF format. They are distinct compression mechanisms produced by the same group. JPEG2000 is based on wavelet compression.
The JPEG2000 driver documented on this page (the JP2KAK driver) is implemented on top of the commercial Kakadu library. This is a high-quality and high-performance JPEG2000 library in wide use in the geospatial and general imaging community. However, it is not free, so normally builds of GDAL from source will not include support for this driver unless the builder purchases a license for the library and configures GDAL accordingly. GDAL includes another JPEG2000 driver based on the free JasPer library.
When reading images this driver will represent the bands as being Byte (8-bit unsigned), 16-bit signed or 16-bit unsigned. Georeferencing and coordinate system information will be available if the file is a GeoJP2 (tm) file. Files color encoded in YCbCr color space will be automatically translated to RGB. Paletted images are also supported.
Starting with GDAL 1.9.0, XMP metadata can be extracted from JPEG2000 files, and will be stored as XML raw content in the xml:XMP metadata domain.
JPEG2000 overviews are maintained as part of the mathematical description of the image. Overviews cannot be built as a separate process, but on read the image will generally be represented as having overview levels at various power-of-two factors.
Creation Options:
JPEG2000 MrSID support is only available with the version 5.x or newer DSDK and ESDK.
<?xml version="1.0"?>
<Jp2Profile version="1.0">
  <Header>
    <name>Default</name>
    <description>LizardTech preferred settings (20051216)</description>
  </Header>
  <Codestream>
    <layers> 8 </layers>
    <levels> 99 </levels>
    <tileSize> 0 0 </tileSize>
    <progressionOrder> RPCL </progressionOrder>
    <codeblockSize> 64 64 </codeblockSize>
    <pltMarkers> true </pltMarkers>
    <wavelet97> false </wavelet97>
    <precinctSize> 256 256 </precinctSize>
  </Codestream>
</Jp2Profile>
(GDAL >= 1.8.0)
This driver is an implementation of a JPEG2000 reader/writer based on the OpenJPEG library v2.
The v2 branch is - at the time of writing - still unreleased, so you'll have to pull it yourself from its Subversion repository: http://openjpeg.googlecode.com/svn/branches/v2. OpenJPEG trunk still derives from the 1.3 series, which doesn't provide access to tile-level reading. The v2 branch enables reading big JPEG2000 images without loading them entirely into memory. This is a noticeable improvement over the JPEG2000 driver based on the JasPer library.
The v2 branch also adds the capability to use the VSI Virtual File API, so the driver is able to read JPEG2000 compressed NITF files.
Starting with GDAL 1.9.0, XMP metadata can be extracted from JPEG2000 files, and will be stored as XML raw content in the xml:XMP metadata domain.
When creating files, the driver does not support writing GeoJP2 or GMLJP2.
CODEC=JP2/J2K : JP2 will add JP2 boxes around the codestream data. The value is determined automatically from the file extension; if the extension is neither JP2 nor J2K, the J2K codec is used.
QUALITY : Percentage between 0 and 100. A value of 50 means the file will be half the size of the uncompressed data, 33 means one third, etc. Defaults to 25.
REVERSIBLE=YES/NO : YES means lossless compression. Defaults to NO.
RESOLUTIONS : Number of resolution levels. Between 1 and 7. Defaults to 6.
BLOCKXSIZE : Tile width. Defaults to 1024.
BLOCKYSIZE : Tile height. Defaults to 1024.
PROGRESSION : Progression order: LRCP, RLCP, RPCL, PCRL or CPRL. Defaults to LRCP.
SOP=YES/NO : YES means generate SOP marker segments. Defaults to NO.
EPH=YES/NO : YES means generate EPH marker segments. Defaults to NO.
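Putting a few of these options together (the file names below are hypothetical, and the command is guarded so it only runs where GDAL is installed); per the QUALITY semantics above, QUALITY=50 targets roughly half the uncompressed size, so a hypothetical 100 MB uncompressed image targets about 50 MB:

```shell
if command -v gdal_translate >/dev/null 2>&1 && [ -f input.tif ]; then
    gdal_translate -of JP2OpenJPEG -co QUALITY=50 -co REVERSIBLE=NO \
        -co RESOLUTIONS=6 -co BLOCKXSIZE=1024 -co BLOCKYSIZE=1024 \
        input.tif output.jp2
fi
# Target size in MB for a hypothetical 100 MB uncompressed image at QUALITY=50
echo $((100 * 50 / 100))
```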
Implemented as gdal/frmts/openjpeg/openjpegdataset.cpp.
Starting with GDAL 1.9.0, XMP metadata can be extracted from JPEG2000 files, and will be stored as XML raw content in the xml:XMP metadata domain.
The following options are supported by the encoder:
imgareatlx=x : Set the x-coordinate of the top-left corner of the image area to x.
imgareatly=y : Set the y-coordinate of the top-left corner of the image area to y.
tilegrdtlx=x : Set the x-coordinate of the top-left corner of the tiling grid to x.
tilegrdtly=y : Set the y-coordinate of the top-left corner of the tiling grid to y.
tilewidth=w : Set the nominal tile width to w.
tileheight=h : Set the nominal tile height to h.
prcwidth=w : Set the precinct width to w. The argument w must be an integer power of two. The default value is 32768.
prcheight=h : Set the precinct height to h. The argument h must be an integer power of two. The default value is 32768.
cblkwidth=w : Set the nominal code block width to w. The argument w must be an integer power of two. The default value is 64.
cblkheight=h : Set the nominal code block height to h. The argument h must be an integer power of two. The default value is 64.
mode=m : Set the coding mode to m. The argument m must have one of the following values: int (integer mode) or real (real mode).
rate=r : Specify the target rate. The argument r is a positive real number. Since a rate of one corresponds to no compression, one should never need to explicitly specify a rate greater than one. By default, the target rate is considered to be infinite.
ilyrrates=r0,r1,...,rN-1 : Specify the rates for any intermediate layers. The argument to this option is a comma-separated list of N rates. Each rate is a positive real number. The rates must increase monotonically. The last rate in the list should be less than or equal to the overall rate (as specified with the rate option).
prg=p : Set the progression order to p. The argument p must have one of the following values: lrcp, rlcp, rpcl, pcrl or cprl.
nomct : Disallow the use of any multicomponent transform.
numrlvls=n : Set the number of resolution levels to n. The argument n must be an integer that is greater than or equal to one. The default value is 6.
sop : Generate SOP marker segments.
eph : Generate EPH marker segments.
lazy : Enable lazy coding mode (a.k.a. arithmetic coding bypass).
termall : Terminate all coding passes.
segsym : Use segmentation symbols.
vcausal : Use vertically stripe causal contexts.
pterm : Use predictable termination.
resetprob : Reset the probability models after each coding pass.
numgbits=n : Set the number of guard bits to n.
The driver will automatically convert images whose color space is YCbCr, CMYK or YCbCrK to RGB, unless GDAL_JPEG_TO_RGB is set to NO (YES is the default). When color space translation to RGB is done, the source color space is indicated in the SOURCE_COLOR_SPACE metadata item of the IMAGE_STRUCTURE domain.
There is currently no support for georeferencing information or metadata for JPEG files. However, if an ESRI world file exists with a .jgw, .jpgw/.jpegw or .wld suffix, it will be read and used to establish the geotransform for the image. If available, a MapInfo .tab file will also be used for georeferencing. Overviews can be built for JPEG files as an external .ovr file.
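For illustration, a .jgw world file is six plain-text lines: pixel x-size, two rotation terms, pixel y-size (negative for north-up), and the x and y coordinates of the center of the upper-left pixel. Using the geotransform from the utm.tif example at the top of this document (origin 440720, 3751320 with 60 m pixels), the file would read:

```
60.0
0.0
0.0
-60.0
440750.0
3751290.0
```

Note the last two lines are the upper-left pixel's center (origin plus half a pixel), not the corner.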
The driver also supports the "zlib compressed mask appended to the file" approach used by a few data providers to add a bitmask to identify pixels that are not valid data. See RFC 15 for further details.
The GDAL JPEG Driver is built using the Independent JPEG Group's jpeg library. Also note that the GeoTIFF driver supports tiled TIFF with JPEG compressed tiles.
To be able to read and write JPEG images with 12-bit samples, you can build GDAL with its internal libjpeg (based on IJG libjpeg-6b, with additional changes for 12-bit sample support), or explicitly pass --with-jpeg12=yes to the configure script when building with an external libjpeg. See the "8 and 12 bit JPEG in TIFF" wiki page for more details.
It is also possible to build the JPEG driver against libjpeg-turbo, a version of libjpeg that is API- and ABI-compatible with IJG libjpeg-6b and uses MMX, SSE, and SSE2 SIMD instructions to accelerate baseline JPEG compression and decompression.
Starting with GDAL 1.9.0, XMP metadata can be extracted from the file, and will be stored as XML raw content in the xml:XMP metadata domain.
(GDAL >= 1.8.0)
This driver is an implementation of a JPEG-LS reader/writer based on the Open Source CharLS library (BSD style license). At the time of writing, the CharLS library in its 1.0 version has no "make install" target. Consequently, the integration of the driver in the GDAL build system is a bit rough. On Unix, you have to edit the GDALmake.opt file and modify the lines related to CHARLS.
The driver can read and write lossless or near-lossless images. Note that it is not aimed at dealing with very large images (unless enough virtual memory is available), since the whole image must be compressed or decompressed in a single operation.
INTERLEAVE=PIXEL/LINE/BAND : Data interleaving in the compressed stream. Defaults to BAND.
LOSS_FACTOR=error_threshold : 0 (the default) means lossless compression. Any higher value is the maximum bound for the error.
Implemented as gdal/frmts/jpegls/jpeglsdataset.cpp.
JPEG 2000 Interactive Protocol (JPIP) flexibility with respect to random access, code stream reordering and incremental decoding is highly exploitable in a networked environment allowing access to remote large files using limited bandwidth connections or high contention networks.
A brief overview of the JPIP event sequence is presented in this section; more information can be found at JPEG 2000 Interactive Protocol (Part 9 JPIP), and the specification can (and should) be purchased from ISO.
An earlier version of JPEG 2000 Part 9 is available at http://www.jpeg.org/public/fcd15444-9v2.pdf; in observance of the ISO copyright, diagrams are not replicated in this documentation.
The JPIP protocol has been abstracted in this format driver; requests are made at the 1:1 resolution level.
JPIP Sequence Diagram

The JPIPKAK driver uses an approach, first demonstrated in J2KViewer by Juan Pablo Garcia Ortiz, of separating the communication layer (socket/HTTP) from the Kakadu kdu_cache object. Separating the communication layer from the data object is desirable since it allows the use of optimized HTTP client libraries such as libcurl or Apache HttpClient (note that jportiz used a plain Java socket), and allows SSL communication between the client and server.
Kakadu's implementation of client communication with a JPIP server uses a socket, and this socket connection holds the state for this client session. A client session with Kakadu can be recreated using the JPIP cache operations between client and server, but no use of traditional HTTP cookies is supported since JPIP is neutral to the transport layer.
The JPIPKAK driver is written using a HTTP client library with the Kakadu cache object and supports optimized communication with a JPIP server (which may or may not support HTTP sessions) and the high performance of the kakadu kdu_region_decompressor.
The implementation supports the GDAL C++ and C API, and provides an initial SWIG wrapper for this driver with a Java ImageIO example ( TODO - qGIS Example).
The driver uses a simple threading model to support requesting reads of the data and remote fetching. This threading model supports two separate client windows, with just one connection to the server. Requests to the server are multiplexed to utilize available bandwidth efficiently. The client identifies these windows by passing 0 (low) or 1 (high) in the PRIORITY metadata request option.
Note: SSL support
If the client is built with SSL support, the driver uses SSL when the request uses the jpips:// scheme, as opposed to jpip://. Note that the driver does not verify server certificates against the curl certificate bundle and is currently set to accept all SSL server certificates.
Note: libCurl
JPIP sets client/server values using HTTP headers; modifications have been made to the GDAL HTTP portability library to support this.
GDAL Sequence Diagram

Fetch the driver to which this dataset relates. If the filename contained in the GDALOpenInfo object has a case-insensitive URI scheme of jpip or jpips, the JPIPKAKDataset is created and initialised; otherwise NULL is returned.
Initialisation involves making an initial connection to the JPIP Server to establish a session and to retrieve the initial metadata about the image (ref. JPIP Sequence Diagram).
If the connection fails, the function returns false and the Open function returns NULL, indicating that opening the dataset with this driver failed.
If the connection is successful, subsequent requests are made to the JPIP server to retrieve all the available metadata about the image. Metadata items are set using GDALMajorObject->SetMetadataItem in the "JPIP" domain.
If the metadata returned from the server includes a GeoJP2 UUID box or a GMLJP2 XML box, this metadata is parsed and used to set the geographic metadata of this dataset.
C API to JPIPKAKDataset->GetMetadata returns metadata for the "JPIP" domain; the keys are "JPIP_NQUALITYLAYERS", "JPIP_NRESOLUTIONLEVELS", "JPIP_NCOMPS" and "JPIP_SPRECISION".
If asynchronous raster IO is active and no longer required, the C API calls JPIPKAKDataset->EndAsyncRasterIO and the JPIPKAKAsyncRasterIO object is deleted.
C API to JPIPKAKDataset->BeginAsyncRasterIO. The client has set the requested view window at 1:1 and has optionally set the discard level, quality layers and thread priority metadata items.
Creates a JPIPKAKAsyncRasterIO Object
Configures the Kakadu machinery and starts a background thread (if not already running) to communicate the current view window request to the server. The background thread updates the kdu_cache object until the JPIP server sends an "End Of Response" (EOR) message for the current view window request.
C API to LockBuffer: not implemented in JPIPKAKAsyncRasterIO; a lock is acquired in JPIPKAKAsyncRasterIO->GetNextUpdatedRegion.
C API to GetNextUpdatedRegion. The function decompresses the available data to generate an image (according to the dataset buffer type set in JPIPKAKDataset->BeginAsyncRasterIO). The window width and height (at the requested discard level) that were decompressed are returned in the region pointer and can be rendered by the client. The status of the rendering operation is one of GARIO_PENDING, GARIO_UPDATE, GARIO_ERROR or GARIO_COMPLETE from the GDALAsyncStatusType enumeration. GARIO_UPDATE and GARIO_PENDING require more calls to GetNextUpdatedRegion to get the full image data; this is the progressive rendering of JPIP. GARIO_COMPLETE indicates the window is complete.
GDALAsyncStatusType is used by GetNextUpdatedRegion to indicate whether the function should be called again, either because Kakadu has more data in its cache to decompress or because the server has not yet sent an End Of Response (EOR) message indicating that the request window is complete.
The region passed into this function is passed by reference; when the call returns, the caller can read it to find the region that has been decompressed. The image data is packed into the buffer, e.g. RGB if the region requested has 3 components.
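The GetNextUpdatedRegion polling protocol described above can be sketched as follows. This is an illustrative Python simulation with a mocked region source, not the driver's C++ API; the status names are taken from the text above.

```python
# Simulated GDALAsyncStatusType values (names from the text above).
GARIO_PENDING, GARIO_UPDATE, GARIO_ERROR, GARIO_COMPLETE = range(4)

def render_progressively(fetch_region):
    """Poll fetch_region() until the view window is complete.

    fetch_region is a stand-in for GetNextUpdatedRegion(): it returns a
    (status, region) pair, where region is whatever rectangle was newly
    decompressed and can be drawn immediately.
    """
    drawn = []
    while True:
        status, region = fetch_region()
        if region is not None:
            drawn.append(region)      # progressive render of this update
        if status == GARIO_COMPLETE:
            return drawn              # window fully decompressed
        if status == GARIO_ERROR:
            raise RuntimeError("async read failed")
        # GARIO_PENDING / GARIO_UPDATE: more data expected, keep polling

# Mock server: two partial updates, then completion.
updates = iter([(GARIO_UPDATE, (0, 0, 64, 64)),
                (GARIO_PENDING, None),
                (GARIO_COMPLETE, (0, 64, 64, 64))])
regions = render_progressively(lambda: next(updates))
```

The loop mirrors the contract above: keep calling while the status is GARIO_PENDING or GARIO_UPDATE, render each returned region, and stop on GARIO_COMPLETE or GARIO_ERROR.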
C API to UnlockBuffer: not implemented in JPIPKAKAsyncRasterIO; a lock is acquired in JPIPKAKAsyncRasterIO->GetNextUpdatedRegion.
Client renders image data
Currently only a Windows makefile is provided; however, the driver should compile on Linux as well, since there are no Windows dependencies.
See Also:
Driver originally developed by ITT VIS and donated to GDAL to enable SSL enabled JPIP client streaming of remote JPEG 2000 datasets.
Metadata records:
FARSITE v. 4 landscape file (LCP) is a multi-band raster format used by wildland fire behavior and fire effects simulation models such as FARSITE, FLAMMAP, and FBAT (www.fire.org). The bands of an LCP file store data describing terrain, tree canopy, and surface fuel. The USGS National Map for LANDFIRE distributes data in LCP format, and programs such as FARSITE and LFDAT can create LCP files from a set of input rasters. The GDAL driver for LCP supports reading only.
An LCP file (.lcp) is basically a raw format with a 7,316-byte header described below. The data type for all bands is 16-bit signed integer. Bands are interleaved by pixel. Five bands are required: elevation, slope, aspect, fuel model, and tree canopy cover. Crown fuel bands (canopy height, canopy base height, canopy bulk density), and surface fuel bands (duff, coarse woody debris) are optional.
The LCP driver reads the linear unit, cell size, and extent, but the LCP file does not specify the projection. UTM projections are typical, but other projections are possible.
The GDAL LCP driver reports dataset- and band-level metadata:
LATITUDE: Latitude of the dataset, negative for southern hemisphere
LINEAR_UNIT: Feet or meters
DESCRIPTION: LCP file description
<band>_UNIT or <band>_OPTION: units or options code for the band
<band>_UNIT_NAME or <band>_OPTION_DESC: descriptive name of units/options
<band>_MIN: minimum value
<band>_MAX: maximum value
<band>_NUM_CLASSES: number of classes, -1 if > 100
<band>_VALUES: comma-delimited list of class values (fuel model band only)
<band>_FILE: original input raster file name for the band
Note: The LCP driver derives from the RawDataset helper class declared in gdal/frmts/raw. It is implemented as gdal/frmts/raw/lcpdataset.cpp.
LCP header format:
Start byte | No. of bytes | Format | Name | Description |
0 | 4 | long | crown fuels | 20 if no crown fuels, 21 if crown fuels exist (crown fuels = canopy height, canopy base height, canopy bulk density) |
4 | 4 | long | ground fuels | 20 if no ground fuels, 21 if ground fuels exist (ground fuels = duff loading, coarse woody) |
8 | 4 | long | latitude | latitude (negative for southern hemisphere) |
12 | 8 | double | loeast | offset to preserve coordinate precision (legacy from 16-bit OS days) |
20 | 8 | double | hieast | offset to preserve coordinate precision (legacy from 16-bit OS days) |
28 | 8 | double | lonorth | offset to preserve coordinate precision (legacy from 16-bit OS days) |
36 | 8 | double | hinorth | offset to preserve coordinate precision (legacy from 16-bit OS days) |
44 | 4 | long | loelev | minimum elevation |
48 | 4 | long | hielev | maximum elevation |
52 | 4 | long | numelev | number of elevation classes, -1 if > 100 |
56 | 400 | long | elevation values | list of elevation values as longs |
456 | 4 | long | loslope | minimum slope |
460 | 4 | long | hislope | maximum slope |
464 | 4 | long | numslope | number of slope classes, -1 if > 100 |
468 | 400 | long | slope values | list of slope values as longs |
868 | 4 | long | loaspect | minimum aspect |
872 | 4 | long | hiaspect | maximum aspect |
876 | 4 | long | numaspects | number of aspect classes, -1 if > 100 |
880 | 400 | long | aspect values | list of aspect values as longs |
1280 | 4 | long | lofuel | minimum fuel model value |
1284 | 4 | long | hifuel | maximum fuel model value |
1288 | 4 | long | numfuel | number of fuel models, -1 if > 100 |
1292 | 400 | long | fuel values | list of fuel model values as longs |
1692 | 4 | long | locover | minimum canopy cover |
1696 | 4 | long | hicover | maximum canopy cover |
1700 | 4 | long | numcover | number of canopy cover classes, -1 if > 100 |
1704 | 400 | long | cover values | list of canopy cover values as longs |
2104 | 4 | long | loheight | minimum canopy height |
2108 | 4 | long | hiheight | maximum canopy height |
2112 | 4 | long | numheight | number of canopy height classes, -1 if > 100 |
2116 | 400 | long | height values | list of canopy height values as longs |
2516 | 4 | long | lobase | minimum canopy base height |
2520 | 4 | long | hibase | maximum canopy base height |
2524 | 4 | long | numbase | number of canopy base height classes, -1 if > 100 |
2528 | 400 | long | base values | list of canopy base height values as longs |
2928 | 4 | long | lodensity | minimum canopy bulk density |
2932 | 4 | long | hidensity | maximum canopy bulk density |
2936 | 4 | long | numdensity | number of canopy bulk density classes, -1 if >100 |
2940 | 400 | long | density values | list of canopy bulk density values as longs |
3340 | 4 | long | loduff | minimum duff |
3344 | 4 | long | hiduff | maximum duff |
3348 | 4 | long | numduff | number of duff classes, -1 if > 100 |
3352 | 400 | long | duff values | list of duff values as longs |
3752 | 4 | long | lowoody | minimum coarse woody |
3756 | 4 | long | hiwoody | maximum coarse woody |
3760 | 4 | long | numwoodies | number of coarse woody classes, -1 if > 100 |
3764 | 400 | long | woody values | list of coarse woody values as longs |
4164 | 4 | long | numeast | number of raster columns |
4168 | 4 | long | numnorth | number of raster rows |
4172 | 8 | double | EastUtm | max X |
4180 | 8 | double | WestUtm | min X |
4188 | 8 | double | NorthUtm | max Y |
4196 | 8 | double | SouthUtm | min Y |
4204 | 4 | long | GridUnits | linear unit: 0 = meters, 1 = feet, 2 = kilometers |
4208 | 8 | double | XResol | cell size width in GridUnits |
4216 | 8 | double | YResol | cell size height in GridUnits |
4224 | 2 | short | EUnits | elevation units: 0 = meters, 1 = feet |
4226 | 2 | short | SUnits | slope units: 0 = degrees, 1 = percent |
4228 | 2 | short | AUnits | aspect units: 0 = Grass categories, 1 = Grass degrees, 2 = azimuth degrees |
4230 | 2 | short | FOptions | fuel model options: 0 = no custom models AND no conversion file, 1 = custom models BUT no conversion file, 2 = no custom models BUT conversion file, 3 = custom models AND conversion file needed |
4232 | 2 | short | CUnits | canopy cover units: 0 = categories (0-4), 1 = percent |
4234 | 2 | short | HUnits | canopy height units: 1 = meters, 2 = feet, 3 = m x 10, 4 = ft x 10 |
4236 | 2 | short | BUnits | canopy base height units: 1 = meters, 2 = feet, 3 = m x 10, 4 = ft x 10 |
4238 | 2 | short | PUnits | canopy bulk density units: 1 = kg/m^3, 2 = lb/ft^3, 3 = kg/m^3 x 100, 4 = lb/ft^3 x 1000 |
4240 | 2 | short | DUnits | duff units: 1 = Mg/ha x 10, 2 = t/ac x 10 |
4242 | 2 | short | WOptions | coarse woody options (1 if coarse woody band is present) |
4244 | 256 | char[] | ElevFile | elevation file name |
4500 | 256 | char[] | SlopeFile | slope file name |
4756 | 256 | char[] | AspectFile | aspect file name |
5012 | 256 | char[] | FuelFile | fuel model file name |
5268 | 256 | char[] | CoverFile | canopy cover file name |
5524 | 256 | char[] | HeightFile | canopy height file name |
5780 | 256 | char[] | BaseFile | canopy base file name |
6036 | 256 | char[] | DensityFile | canopy bulk density file name |
6292 | 256 | char[] | DuffFile | duff file name |
6548 | 256 | char[] | WoodyFile | coarse woody file name |
6804 | 512 | char[] | Description | LCP file description |
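As an illustration of the layout in the table above, a few header fields can be pulled out with struct unpacks. This is a hedged sketch, not the driver's code: it assumes little-endian byte order with 4-byte longs, parses only a handful of fields, and the sample header below is synthetic.

```python
import struct

# Field name -> (byte offset, struct format); a small subset of the
# header table above. "long" is read as a little-endian int32.
LCP_FIELDS = {
    "crown_fuels": (0,    "<i"),
    "latitude":    (8,    "<i"),
    "numeast":     (4164, "<i"),
    "numnorth":    (4168, "<i"),
    "GridUnits":   (4204, "<i"),
    "XResol":      (4208, "<d"),
    "YResol":      (4216, "<d"),
}

def read_lcp_header(header):
    """Extract a few descriptive fields from a 7,316-byte LCP header."""
    return {name: struct.unpack_from(fmt, header, offset)[0]
            for name, (offset, fmt) in LCP_FIELDS.items()}

# Synthetic header for illustration: 30 m cells, meters, 100 x 200 raster.
hdr = bytearray(7316)
struct.pack_into("<i", hdr, 0, 21)       # crown fuels present
struct.pack_into("<i", hdr, 8, 46)       # latitude 46 (northern hemisphere)
struct.pack_into("<i", hdr, 4164, 100)   # numeast: raster columns
struct.pack_into("<i", hdr, 4168, 200)   # numnorth: raster rows
struct.pack_into("<i", hdr, 4204, 0)     # GridUnits: 0 = meters
struct.pack_into("<d", hdr, 4208, 30.0)  # XResol: cell width
struct.pack_into("<d", hdr, 4216, 30.0)  # YResol: cell height
fields = read_lcp_header(hdr)
```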
Chris Toney, 2009-02-14
Blocks are organized as pixel-high scanlines (rows), with the first scanline at the top (north) edge of the DEM, and adjacent pixels on each line increasing from left to right (west to east).
The band type is always Float32, even though format versions 4 and 5 physically store 16.16 fixed-point values; the driver automatically converts them to floating point.
dataset::GetGeoTransform() returns a simple world scaling with a centered origin for formats 4 through 6. For version 7, it returns a real-world transform except for rotations. The identity transform is not considered an error condition; Leveller documents often use it.
band::GetUnitType() will report the measurement units used by the file instead of converting unusual types to meters. A list of unit types is in the levellerdataset.cpp module.
band::GetScale() and band::GetOffset() return the physical-to-logical (i.e., raw to real-world) transform for the elevation data.
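The raw-to-real-world mapping implied by GetScale()/GetOffset() is the usual linear transform; the sketch below uses hypothetical scale and offset values for illustration.

```python
def raw_to_real(raw, scale, offset):
    """Map a stored (physical) elevation sample to its real-world
    (logical) value, mirroring what a caller would do with the values
    returned by band::GetScale() and band::GetOffset()."""
    return raw * scale + offset

# Hypothetical band storing elevations in 0.01 m steps above a -50 m base:
elev = raw_to_real(12345, 0.01, -50.0)
```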
band::SetUnitType() can be set to any of the unit types listed in the levellerdataset.cpp module.
dataset::SetGeoTransform() should not include rotation data.
As with the Terragen driver, the MINUSERPIXELVALUE option must be specified. This lets the driver correctly map from logical (real-world) elevations to physical elevations.
Header information is written out on the first call to band::IWriteBlock.
v1.1 (Oct 20/05): Fixed coordsys and elev scaling errors.
Memory datasets support most kinds of auxiliary information, including metadata, coordinate systems, georeferencing, GCPs, color interpretation, nodata, color tables and all pixel data types.
MEM:::option=value[,option=value...]

For example:
MEM:::DATAPOINTER=342343408,PIXELS=100,LINES=100,BANDS=3,DATATYPE=Byte, PIXELOFFSET=3,LINEOFFSET=300,BANDOFFSET=1
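For illustration, such a dataset name can be assembled from a live in-process buffer; mem_dataset_name below is a hypothetical helper, and the resulting name is only meaningful inside the process that owns the memory, which must stay alive while the dataset is open.

```python
import ctypes

def mem_dataset_name(buf, pixels, lines, bands, datatype="Byte"):
    """Build a MEM::: dataset name for a band-interleaved-by-pixel
    byte buffer. DATAPOINTER is the in-process address of the buffer;
    the offsets are expressed in bytes, as in the example above."""
    ptr = ctypes.addressof(buf)
    return ("MEM:::DATAPOINTER=%d,PIXELS=%d,LINES=%d,BANDS=%d,"
            "DATATYPE=%s,PIXELOFFSET=%d,LINEOFFSET=%d,BANDOFFSET=1"
            % (ptr, pixels, lines, bands, datatype, bands, bands * pixels))

# A 100 x 100 x 3-band byte buffer, interleaved by pixel:
buf = (ctypes.c_ubyte * (100 * 100 * 3))()
name = mem_dataset_name(buf, 100, 100, 3)
```

The resulting string matches the example above except for the DATAPOINTER value, which is whatever address the buffer happens to occupy.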
The MEM format is one of the few that supports the AddBand() method. The AddBand() method supports DATAPOINTER, PIXELOFFSET and LINEOFFSET options to reference an existing memory array.
To select an MFF2 dataset, select the directory containing the attrib, and image_data files for the dataset.
Currently only latitude/longitude and UTM projections are supported (georef.projection.name = ll or georef.projection.name = utm), with the affine transform computed from the lat/long control points. In any event, if GCPs are available in a georef file, they are returned with the dataset.
Newly created files (with a type of MFF2) are always just raw rasters with no georeferencing information. For reading and creation, all data types (real, integer and complex in bit depths of 8, 16, 32) should be supported.
IMPORTANT: When creating a new MFF2, be sure to set the projection before setting the geotransform (this is necessary because the geotransform is stored internally as 5 latitude-longitude ground control points, and the projection is needed to do the conversion).
NOTE: Implemented as gdal/frmts/raw/hkvdataset.cpp.
extent.cols = 800
extent.rows = 1040
pixel.size = 32
pixel.encoding = { unsigned twos_complement *ieee_754 }
pixel.field = { *real complex }
pixel.order = { lsbf *msbf }
version = 1.1

This specifies an image that is 1040 lines by 800 pixels in extent. The pixels are 32 bits of real data in "most significant byte first" (msbf) order, encoded according to the ieee_754 specification. In MFF2, when a value must belong to a certain subset (e.g. pixel.order must be either lsbf or msbf), all options are displayed between curly brackets, and the one appropriate for the current file is indicated with a "*".
The file may also contain the following lines indicating the number of channels of data, and how they are interleaved within the binary data file.
channel.enumeration = 1 channel.interleave = { *pixel tile sequential }
top_left.latitude = 32.93333333333334
top_left.longitude = 130.0
top_right.latitude = 32.93333333333334
top_right.longitude = 130.5
bottom_left.latitude = 32.50000000000001
bottom_left.longitude = 130.0
bottom_right.latitude = 32.50000000000001
bottom_right.longitude = 130.5
centre.latitude = 32.71666666666668
centre.longitude = 130.25
projection.origin_longitude = 0
projection.name = ll
spheroid.name = wgs-84

This describes an orthogonal latitude/longitude (ll) projected image, with latitudes and longitudes based on the wgs-84 ellipsoid.
Since MFF2 version 1.1, top_left refers to the top left corner of the top left pixel. top_right refers to the top right corner of the top right pixel. bottom_left refers to the bottom left corner of the bottom left pixel. bottom_right refers to the bottom right corner of the bottom right pixel. centre refers to the centre of the four corners defined above (center of the image).
Mathematically, for an Npix by Nline image, the corners and centre in (pixel,line) coordinates for MFF2 version 1.1 are:
top_left: (0,0)
top_right: (Npix,0)
bottom_left: (0,Nline)
bottom_right: (Npix,Nline)
centre: (Npix/2.0,Nline/2.0)

These calculations are done using floating point arithmetic (i.e. centre coordinates may take on non-integer values).
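Given the corner definitions above, a latitude/longitude for any (pixel, line) position of an ll-projected image can be found by bilinear interpolation between the four corners. The helper below is an illustrative sketch (mff2_latlong is not part of the driver), assuming the image really is affine in lat/long, as the ll projection implies.

```python
def mff2_latlong(px, ln, npix, nline, corners):
    """Bilinearly interpolate a (lat, long) pair at pixel/line (px, ln)
    from the four MFF2 corner coordinates, ordered top_left, top_right,
    bottom_left, bottom_right, each as a (lat, long) pair."""
    u, v = px / float(npix), ln / float(nline)
    tl, tr, bl, br = corners
    top = [tl[i] + u * (tr[i] - tl[i]) for i in (0, 1)]  # along top edge
    bot = [bl[i] + u * (br[i] - bl[i]) for i in (0, 1)]  # along bottom edge
    return tuple(top[i] + v * (bot[i] - top[i]) for i in (0, 1))

# Corners from the georef example above, as (lat, long):
corners = [(32.93333333333334, 130.0), (32.93333333333334, 130.5),
           (32.50000000000001, 130.0), (32.50000000000001, 130.5)]
centre = mff2_latlong(400, 520, 800, 1040, corners)
```

Evaluated at the centre of an 800 x 1040 image, this reproduces the centre.latitude/centre.longitude values shown in the georef example.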
Note that the corners are always expressed in latitudes/longitudes, even for projected images.
utm - Universal Transverse Mercator projected image. Parameters: spheroid name, projection.origin_longitude (the central meridian for the utm projection). The central meridian must be the meridian at the centre of a UTM zone, i.e. 3 degrees, 9 degrees, 15 degrees, etc. If this is not specified or is not a valid UTM central meridian, the reader should reset the value to the nearest valid central meridian based on the central longitude of the output image. The latitude at the origin of the UTM projection is always 0 degrees.
airy-1830: 6377563.396 299.3249646
modified-airy: 6377340.189 299.3249646
australian-national: 6378160 298.25
bessel-1841-namibia: 6377483.865 299.1528128
bessel-1841: 6377397.155 299.1528128
clarke-1858: 6378294.0 294.297
clarke-1866: 6378206.4 294.9786982
clarke-1880: 6378249.145 293.465
everest-india-1830: 6377276.345 300.8017
everest-sabah-sarawak: 6377298.556 300.8017
everest-india-1956: 6377301.243 300.8017
everest-malaysia-1969: 6377295.664 300.8017
everest-malay-sing: 6377304.063 300.8017
everest-pakistan: 6377309.613 300.8017
modified-fisher-1960: 6378155 298.3
helmert-1906: 6378200 298.3
hough-1960: 6378270 297
hughes: 6378273.0 298.279
indonesian-1974: 6378160 298.247
international-1924: 6378388 297
iugc-67: 6378160.0 298.254
iugc-75: 6378140.0 298.25298
krassovsky-1940: 6378245 298.3
kaula: 6378165.0 292.308
grs-80: 6378137 298.257222101
south-american-1969: 6378160 298.25
wgs-72: 6378135 298.26
wgs-84: 6378137 298.257223563
ev-wgs-84: 6378137 298.252841
ev-bessel: 6377397 299.1976073
channel.enumeration: (optional - only needed for multiband) Number of channels of data (e.g. 3 for rgb)
channel.interleave = { *pixel tile sequential } : (optional - only needed for multiband) For multiband data, indicates how the channels are interleaved. *pixel indicates that data is stored red value, green value, blue value, red value, green value, blue value etc., as opposed to (line of red values) (line of green values) (line of blue values) or (entire red channel) (entire green channel) (entire blue channel)
extent.cols: Number of columns of data.
extent.rows: Number of rows of data.
pixel.encoding = { *unsigned twos-complement ieee-754 }: Combines with pixel.size and pixel.field to give the data type:
(encoding, field, size) - type
(unsigned, real, 8) - unsigned byte data
(unsigned, real, 16) - unsigned int 16 data
(unsigned, real, 32) - unsigned int 32 data
(twos-complement, real, 16) - signed int 16 data
(twos-complement, real, 32) - signed int 32 data
(twos-complement, complex, 64) - complex signed int 32 data
(ieee-754, real, 32) - real 32 bit floating point data
(ieee-754, real, 64) - real 64 bit floating point data
(ieee-754, complex, 64) - complex 32 bit floating point data
(ieee-754, complex, 128) - complex 64 bit floating point data
pixel.size: Size of one pixel of one channel (bits).
pixel.field = { *real complex }: Whether the data is real or complex.
pixel.order = { *lsbf msbf }: Byte ordering of the data (least or most significant byte first).
version: (only in newer versions - if not present, an older version is assumed) Version of MFF2.
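The (encoding, field, size) combinations above map onto GDAL data types. The lookup below restates that list; the right-hand names are the standard GDAL type names (Byte, Int16, CFloat32, ...), which is our reading of the descriptions rather than anything the MFF2 header itself stores.

```python
# (pixel.encoding, pixel.field, pixel.size in bits) -> GDAL type name,
# transcribed from the list above.
MFF2_TYPES = {
    ("unsigned",        "real",      8): "Byte",
    ("unsigned",        "real",     16): "UInt16",
    ("unsigned",        "real",     32): "UInt32",
    ("twos-complement", "real",     16): "Int16",
    ("twos-complement", "real",     32): "Int32",
    ("twos-complement", "complex",  64): "CInt32",
    ("ieee-754",        "real",     32): "Float32",
    ("ieee-754",        "real",     64): "Float64",
    ("ieee-754",        "complex",  64): "CFloat32",
    ("ieee-754",        "complex", 128): "CFloat64",
}

def mff2_gdal_type(encoding, field, size):
    """Resolve pixel.encoding/pixel.field/pixel.size to a GDAL type name."""
    return MFF2_TYPES[(encoding, field, size)]
```

For instance, the 32-bit real ieee_754 pixels in the header example earlier resolve to Float32.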
This driver supports reading of MrSID image files using LizardTech's decoding software development kit (DSDK). The DSDK is not free software; you should contact LizardTech to obtain it (see link at end of this page). If you are using GCC, please ensure that you have the same compiler version as was used for DSDK compilation. It is a C++ library, so you may get incompatibilities in C++ name mangling between different GCC versions (2.95.x and 3.x).
Recent versions of the DSDK also support decoding the JPEG2000 file format, so this driver can be used for JPEG2000 too.
Starting with GDAL 1.9.0, XMP metadata can be extracted from JPEG2000 files, and will be stored as XML raw content in the xml:XMP metadata domain.
This driver provides a way to view a MrSID/MG4 compressed LiDAR file as a raster DEM. The specifics of the conversion depend on the desired cell size, filter criteria, aggregation methods and possibly several other parameters. For this reason, the best way to read a MrSID/MG4 compressed LiDAR file is by referencing it in a View (.view) file, which also parameterizes its raster conversion. The driver will read an MG4 file directly, but it uses default rasterization parameters that may not produce a desirable output. The contents of the View file are described in the specification MrSID/MG4 LiDAR View Documents.
MrSID/MG4 is a wavelet-based point-cloud compression technology. You may think of it like a LAS file, only smaller and with a built-in spatial index. It is developed and distributed by LizardTech. This driver supports reading of MG4 LiDAR files using LizardTech's decoding software development kit (DSDK). This DSDK is freely distributed, but it is not open source software. You should contact LizardTech to obtain it (see link at end of this page).
The simplest way to view an MG4 file is to wrap it in a View (.view) file like this. Here, the relative reference to the MG4 file means that the file must exist in the same directory as the .view file. Since we're not mapping any bands explicitly, we get the default, which is elevation only. By default, we aggregate based on mean. That is, if two (or more) points land on a single cell, we will expose the average of the two. There's no filtering here so we'll get all the points regardless of classification code or return number. Since the native datatype of elevation is "Float64", that is the datatype of the band we will expose.
<PointCloudView> <InputFile>Tetons.sid</InputFile> </PointCloudView>
This is similar to the example above, but we are using the optional ClipBox tag to select a 300 meter North-South swath through the cloud. If we wanted to crop in the East-West direction, we could have specified that explicitly instead of using NOFILTER for those values. Similarly, we could also have cropped in the Z direction.
<PointCloudView> <InputFile>Tetons.sid</InputFile> <ClipBox>505500 505800 NOFILTER NOFILTER</ClipBox> </PointCloudView>
Here, we expose a single band (elevation) but we want only those points that have been classified as "Ground". The ClassificationFilter specifies a value of 2 - the ASPRS point class code that stipulates "Ground" points. Additionally, instead of the default "Mean" aggregation method, we specify "Max". This means that if two (or more) points land on a single cell, we expose the larger of the two elevation values.
<PointCloudView> <InputFile>E:\ESRIDevSummit2010\Tetons.sid</InputFile> <Band> <!-- Max Bare Earth--> <Channel>Z</Channel> <AggregationMethod>Max</AggregationMethod> <ClassificationFilter>2</ClassificationFilter> </Band> </PointCloudView>
Here we expose an intensity image from the point cloud.
<PointCloudView> <InputFile>Tetons.sid</InputFile> <Band> <!-- All intensities --> <Channel>Intensity</Channel> </Band> </PointCloudView>
Some point cloud images include RGB data. If that's the case, you can use a .view file like this to expose that data.
<PointCloudView> <InputFile>Grass Lake Small.xyzRGB.sid</InputFile> <Band> <Channel>Red</Channel> </Band> <Band> <Channel>Green</Channel> </Band> <Band> <Channel>Blue</Channel> </Band> </PointCloudView>
This driver does not support writing MG4 files.
Only one <InputFile> tag is supported. It must reference an MG4 file.
The only <InterpolationMethod> that is supported is <None> (the default). Use this to specify a NODATA value if the default (the maximum value of the datatype) is not what you want. See the View Specification for details.
There is insufficient error checking for format errors and invalid parameters. Many invalid entries will likely fail silently.
This driver implements reading support for Meteosat Second Generation files. These are files with names like H-000-MSG1__-MSG1________-HRV______-000007___-200405311115-C_, commonly distributed in a folder structure with dates (e.g. 2004\05\31 for the file mentioned).
The MSG files are wavelet-compressed. A decompression library licensed from EUMETSAT is needed (Public Wavelet Transform Decompression Library Software, or Wavelet Transform Software for short). The software compiles on Microsoft Windows, Linux and Solaris operating systems, and works on 32-bit and 64-bit as well as mixed architectures. It is licensed software, available as a free download upon acceptance of the Wavelet Transform Software Licence during the electronic registration process.
Part of the source of the file xrithdr_extr.c from XRIT2PIC is used to parse MSG headers. This source is licensed under the terms of the GNU General Public License as published by the Free Software Foundation.
This driver is not "enabled" by default. See the Build Instructions on how to include this driver in your GDAL library.
Download the Eumetsat library for wavelet decompression. This is a file named PublicDecompWT.zip. Extract the content in a subdirectory with the same name (under frmts/msg).
If you are building with Visual Studio 6.0, extract the .vc makefiles for the PublicDecompWT from the file PublicDecompWTMakefiles.zip.
If you build using the GNUmakefile, use the --with-msg option to enable the MSG driver:
./configure --with-msg
If you find that some adjustments are needed in the makefile and/or the MSG source files, please "commit" them. The EUMETSAT library promises to be "platform independent", but as we are working with Microsoft Windows and Visual Studio 6.0, we did not have the facilities to check whether the rest of the MSG driver is. Furthermore, apply steps 4 to 7 from the GDAL Driver Implementation Tutorial, section "Adding Driver to GDAL Tree".
An MSG Wiki page is available at http://trac.osgeo.org/gdal/wiki/MSG. It is dedicated to documenting building and usage hints.
It is possible to select individual files for opening. In this case, the driver will gather the files that correspond to the other strips of the same image, and correctly compose the image.
Example with gdal_translate.exe:
gdal_translate C:\hrit_a\2004\05\31\H-000-MSG1__-MSG1________-HRV______-000008___-200405311115-C_ C:\output\myimage.tif
It is also possible to use the following syntax for opening the MSG files:
Examples with gdal_translate utility:
Example call to fetch an MSG image of 200501181200 with bands 1, 2 and 3 in IMG format:
gdal_translate -of HFA MSG(\\pc2133-24002\RawData\,200501181200,(1,2,3),N,N,1,1) d:\output\outfile.img
In JPG format, and converting the 10 bits image to 8 bits by dividing all values by 4:
gdal_translate -of JPEG MSG(\\pc2133-24002\RawData\,200501181200,(1,2,3),N,B,1,1) d:\output\outfile.jpg
The same, but reordering the bands in the JPEG image to resemble RGB:
gdal_translate -of JPEG MSG(\\pc2133-24002\RawData\,200501181200,(3,2,1),N,B,1,1) d:\output\outfile.jpg
Geotiff output, only band 2, original 10 bits values:
gdal_translate -of GTiff MSG(\\pc2133-24002\RawData\,200501181200,2,N,N,1,1) d:\output\outfile.tif
Band 12:
gdal_translate -of GTiff MSG(\\pc2133-24002\RawData\,200501181200,12,N,N,1,1) d:\output\outfile.tif
The same band 12 with radiometric calibration in mW/m2/sr/(cm-1)-1:
gdal_translate -of GTiff MSG(\\pc2133-24002\RawData\,200501181200,12,N,R,1,1) d:\output\outfile.tif
Retrieve data from c:\hrit-data\2005\01\18 instead of \\pc2133-24002\RawData\... :
gdal_translate -of GTiff MSG(c:\hrit-data\2005\01\18,200501181200,12,Y,R,1,1) d:\output\outfile.tif
Another option to do the same (note the difference in the Y and the N for the "use_root_folder" parameter):
gdal_translate -of GTiff MSG(c:\hrit-data\,200501181200,12,N,R,1,1) d:\output\outfile.tif
Without radiometric calibration, but for 10 consecutive cycles (thus from 1200 to 1415):
gdal_translate -of GTiff MSG(c:\hrit-data\,200501181200,12,N,N,10,1) d:\output\outfile.tif
10 cycles, but every hour (thus from 1200 to 2100):
gdal_translate -of GTiff MSG(c:\hrit-data\,200501181200,12,N,N,10,4) d:\output\outfile.tif
10 cycles, every hour, and bands 3, 2 and 1:
gdal_translate -of GTiff MSG(c:\hrit-data\,200501181200,(3,2,1),N,N,10,4) d:\output\outfile.tif
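The positional parameters in these MSG(...) strings can be assembled programmatically. The helper below is purely illustrative; the parameter names (use_root_folder, calibration) are taken from the surrounding text, and msg_dataset_name is not part of GDAL.

```python
def msg_dataset_name(folder, timestamp, bands, use_root_folder,
                     calibration, ncycles, step):
    """Assemble an MSG(...) dataset name of the form used in the
    examples above; bands may be one band number or a sequence."""
    if isinstance(bands, (list, tuple)):
        band_str = "(%s)" % ",".join(str(b) for b in bands)
    else:
        band_str = str(bands)
    return "MSG(%s,%s,%s,%s,%s,%d,%d)" % (
        folder, timestamp, band_str, use_root_folder,
        calibration, ncycles, step)

# Reproduce the "10 cycles, every hour, bands 3, 2 and 1" example:
name = msg_dataset_name("c:\\hrit-data\\", "200501181200", (3, 2, 1),
                        "N", "N", 10, 4)
```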
The images use the Geostationary Satellite View projection. Most GIS packages don't recognize this projection (of the packages we know, only ILWIS does), but gdalwarp.exe can be used to re-project the images.
See Also:
GDAL supports reading only of MSG native files. These files may have anything from 1 to 12 bands, all at 10-bit resolution.
Includes support for the 12th band (HRV - High Resolution Visible). This is implemented as a subset, i.e., it is accessed by prefixing the filename with the tag "HRV:".
Similarly, it is possible to obtain floating point radiance values instead of the usual 10-bit digital numbers (DNs). This subset is accessed by prefixing the filename with the tag "RAD:".
Georeferencing is currently supported, but the results may not be acceptable (accurate enough), depending on your requirements. The current workaround is to implement the CGMS Geostationary projection directly, using the code available from EUMETSAT.
The fill value metadata (or missing_value, for backward compatibility) is preserved as the NODATA value when available.
NOTE: Implemented as gdal/frmts/netcdf/netcdfdataset.cpp.
If the file contains only one netCDF array which appears to be an image, it may be accessed directly, but if the file contains multiple images it may be necessary to import the file via a two step process.
The first step is to get a report of the component images (subdatasets) in the file using gdalinfo, and then to import the desired images using gdal_translate. The gdalinfo utility lists all multidimensional subdatasets from the input netCDF file.
The names of individual images are assigned to the SUBDATASET_n_NAME metadata item. The description for each image is found in the SUBDATASET_n_DESC metadata item. netCDF subdataset names follow this format: NETCDF:filename:variable_name
where filename is the name of the input file, and variable_name is the variable selected within the file.
In the second step you provide this name to gdalinfo to get information about the dataset, or to gdal_translate to read the dataset.
For example, we want to read data from a netCDF file:
$ gdalinfo sst.nc
Driver: netCDF/Network Common Data Format
Size is 512, 512
Coordinate System is `'
Metadata:
NC_GLOBAL#title=IPSL model output prepared for IPCC Fourth Assessment SRES A2 experiment
NC_GLOBAL#institution=IPSL (Institut Pierre Simon Laplace, Paris, France)
NC_GLOBAL#source=IPSL-CM4_v1 (2003) : atmosphere : LMDZ (IPSL-CM4_IPCC, 96x71x19) ; ocean ORCA2 (ipsl_cm4_v1_8, 2x2L31); sea ice LIM (ipsl_cm4_v
NC_GLOBAL#contact=Sebastien Denvil, sebastien.denvil@ipsl.jussieu.fr
NC_GLOBAL#project_id=IPCC Fourth Assessment
NC_GLOBAL#table_id=Table O1 (13 November 2004)
NC_GLOBAL#experiment_id=SRES A2 experiment
NC_GLOBAL#realization=1
NC_GLOBAL#cmor_version=9.600000e-01
NC_GLOBAL#Conventions=CF-1.0
NC_GLOBAL#history=YYYY/MM/JJ: data generated; YYYY/MM/JJ+1 data transformed At 16:37:23 on 01/11/2005, CMOR rewrote data to comply with CF standards and IPCC Fourth Assessment requirements
NC_GLOBAL#references=Dufresne et al, Journal of Climate, 2015, vol XX, p 136
NC_GLOBAL#comment=Test drive
Subdatasets:
SUBDATASET_1_NAME=NETCDF:"sst.nc":lon_bnds
SUBDATASET_1_DESC=[180x2] lon_bnds (64-bit floating-point)
SUBDATASET_2_NAME=NETCDF:"sst.nc":lat_bnds
SUBDATASET_2_DESC=[170x2] lat_bnds (64-bit floating-point)
SUBDATASET_3_NAME=NETCDF:"sst.nc":time_bnds
SUBDATASET_3_DESC=[24x2] time_bnds (64-bit floating-point)
SUBDATASET_4_NAME=NETCDF:"sst.nc":tos
SUBDATASET_4_DESC=[24x170x180] sea_surface_temperature (32-bit floating-point)
Corner Coordinates:
Upper Left ( 0.0, 0.0)
Lower Left ( 0.0, 512.0)
Upper Right ( 512.0, 0.0)
Lower Right ( 512.0, 512.0)
Center ( 256.0, 256.0)
This netCDF file contains 4 datasets: lon_bnds, lat_bnds, time_bnds and tos. Now select the subdataset, described as:
NETCDF:"sst.nc":tos
[24x170x180] sea_surface_temperature (32-bit floating-point)
and get information about the number of bands inside this variable.
$ gdalinfo NETCDF:"sst.nc":tos Driver: netCDF/Network Common Data Format Size is 180, 170 Coordinate System is `' Origin = (1.000000,-79.500000) Pixel Size = (1.98888889,0.99411765) Metadata: NC_GLOBAL#title=IPSL model output prepared for IPCC Fourth Assessment SRES A2 experiment NC_GLOBAL#institution=IPSL (Institut Pierre Simon Laplace, Paris, France).... More metadata
time#standard_name=time time#long_name=time time#units=days since 2001-1-1 time#axis=T time#calendar=360_day time#bounds=time_bnds time#original_units=seconds since 2001-1-1 Corner Coordinates: Upper Left ( 1.0000000, -79.5000000) Lower Left ( 1.0000000, 89.5000000) Upper Right ( 359.000, -79.500) Lower Right ( 359.000, 89.500) Center ( 180.0000000, 5.0000000) Band 1 Block=180x1 Type=Float32, ColorInterp=Undefined NoData Value=1e+20 Metadata: NETCDF_VARNAME=tos NETCDF_DIMENSION_time=15 NETCDF_time_units=days since 2001-1-1 Band 2 Block=180x1 Type=Float32, ColorInterp=Undefined NoData Value=1e+20 Metadata: NETCDF_VARNAME=tos NETCDF_DIMENSION_time=45 NETCDF_time_units=days since 2001-1-1.... More Bands
Band 22 Block=180x1 Type=Float32, ColorInterp=Undefined
NoData Value=1e+20
Metadata:
NETCDF_VARNAME=tos
NETCDF_DIMENSION_time=645
NETCDF_time_units=days since 2001-1-1
Band 23 Block=180x1 Type=Float32, ColorInterp=Undefined
NoData Value=1e+20
Metadata:
NETCDF_VARNAME=tos
NETCDF_DIMENSION_time=675
NETCDF_time_units=days since 2001-1-1
Band 24 Block=180x1 Type=Float32, ColorInterp=Undefined
NoData Value=1e+20
Metadata:
NETCDF_VARNAME=tos
NETCDF_DIMENSION_time=705
NETCDF_time_units=days since 2001-1-1
gdalinfo displays the number of bands in this subdataset. There is metadata attached to each band. In this example, the metadata tells us that each band corresponds to an array of monthly sea surface temperature starting from January 2001. There are 24 months of data in this subdataset. You may also use gdal_translate to read the subdataset.
Note that you should provide exactly the contents of the line marked SUBDATASET_n_NAME to GDAL, including the NETCDF: prefix.
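The SUBDATASET_n_NAME / SUBDATASET_n_DESC metadata pairs can be walked programmatically. The sketch below assumes the metadata has already been fetched into a plain key/value dict (for example via the GDAL API's GetMetadata("SUBDATASETS")); the helper itself is illustrative and not part of GDAL:

```python
def list_subdatasets(metadata):
    """Pair SUBDATASET_n_NAME with SUBDATASET_n_DESC entries.

    metadata: dict of key/value strings, as reported in the
    "Subdatasets" section of gdalinfo output.
    """
    pairs = []
    n = 1
    while ("SUBDATASET_%d_NAME" % n) in metadata:
        pairs.append((metadata["SUBDATASET_%d_NAME" % n],
                      metadata.get("SUBDATASET_%d_DESC" % n, "")))
        n += 1
    return pairs

# Sample metadata mimicking the sst.nc example above:
md = {
    "SUBDATASET_1_NAME": 'NETCDF:"sst.nc":lon_bnds',
    "SUBDATASET_1_DESC": "[180x2] lon_bnds (64-bit floating-point)",
    "SUBDATASET_2_NAME": 'NETCDF:"sst.nc":tos',
    "SUBDATASET_2_DESC": "[24x170x180] sea_surface_temperature (32-bit floating-point)",
}
for name, desc in list_subdatasets(md):
    print(name, "->", desc)
```

Each returned name (including the NETCDF: prefix) can then be passed verbatim to gdalinfo or gdal_translate.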
The NETCDF prefix must come first; it triggers the subdataset netCDF driver. This driver is intended only for importing remote sensing and geospatial datasets in the form of raster images. If you want to explore all the data contained in a netCDF file, you should use other tools.
If those two methods fail, the netCDF driver will try to read the following metadata directly and set up georeferencing.
Each band will have metadata tied to it giving a short description of the data it contains.
The translation follows these directives:
$ gdalinfo NETCDF:"sst.nc":tos
Driver: netCDF/Network Common Data Format
Size is 180, 170
Coordinate System is `'
Origin = (1.000000,-79.500000)
Pixel Size = (1.98888889,0.99411765)
Metadata:
NetCDF global attributes:
NC_GLOBAL#title=IPSL model output prepared for IPCC Fourth Assessment SRES A2 experiment
Variable attributes for tos, lon, lat and time:
tos#standard_name=sea_surface_temperature tos#long_name=Sea Surface Temperature tos#units=K tos#cell_methods=time: mean (interval: 30 minutes) tos#_FillValue=1.000000e+20 tos#missing_value=1.000000e+20 tos#original_name=sosstsst tos#original_units=degC tos#history= At 16:37:23 on 01/11/2005: CMOR altered the data in the following ways: added 2.73150E+02 to yield output units; Cyclical dimension was output starting at a different lon; lon#standard_name=longitude lon#long_name=longitude lon#units=degrees_east lon#axis=X lon#bounds=lon_bnds lon#original_units=degrees_east lat#standard_name=latitude lat#long_name=latitude lat#units=degrees_north lat#axis=Y lat#bounds=lat_bnds lat#original_units=degrees_north time#standard_name=time time#long_name=time time#units=days since 2001-1-1 time#axis=T time#calendar=360_day time#bounds=time_bnds time#original_units=seconds since 2001-1-1
You need to download or compile the netCDF library before configuring GDAL with netCDF support.
Please note that with CygWIN you need to be sure that the DLLs are executable, or GDAL will not execute.
chmod a+rx [NetCDF DLLs]
The netCDF DLLs directory must be in your PATH.
Read support has been tested on various products, including CIB and CADRG frames from RPF products, ECRG frames, and HRE products.
Color tables for pseudocolored images are read. In some cases nodata values may be identified.
Lat/Long extents are read from the IGEOLO information in the image header if available. If high precision lat/long georeferencing information is available in RPF auxiliary data, it will be used in preference to the low precision IGEOLO information. If a BLOCKA instance is found, the higher precision coordinates of BLOCKA are used, provided the block data covers the complete image - that is, the L_LINES field with the row count for that block is equal to the row count of the image. Additionally, all BLOCKA instances are returned as metadata. If GeoSDE TREs are available, they will be used to provide higher precision coordinates.
Most file header and image header fields are returned as dataset level metadata.
In addition to the export-oriented CreateCopy() API, it is also possible to create a blank NITF file using Create() and write imagery on demand. However, using this methodology, writing of pseudocolor tables and georeferencing is not supported unless appropriate IREP and ICORDS creation options are supplied.
Creation Options:
OGDI datasets are opened within GDAL by selecting the GLTP url. For instance, gltp://gdal.velocet.ca/adrg/usr4/mpp1/adrg/TPSUS0101 would open the ADRG dataset stored at /usr4/mpp1/adrg/TPSUS0101 on the machine gdal.velocet.ca (assuming it has an OGDI server running) using the 'adrg' driver. This default access to a whole datastore will attempt to represent all layers (and all supported family types) as bands all at the resolution and region reported by the datastore when initially accessed.
It is also possible to select a particular layer and access family from an OGDI datastore by indicating the layer name family in the name. The GDAL dataset name gltp:/adrg/usr4/mpp1/adrg/TPUS0101:"TPUS0102.IMG":Matrix would select the layer named TPUS0102.IMG from the dataset /usr4/mpp1/adrg/TPUS0101 on the local system using the ADRG driver, and access the Matrix family. When a specific layer is accessed in this manner GDAL will attempt to determine the region and resolution from the OGDI 3.1 capabilities document. Note that OGDI 3.0 datastores must have the layer and family specified in the dataset name since they cannot be determined automatically.
e.g.
gltp://gdal.velocet.ca/adrg/usr4/mpp1/adrg/TPUS0101
gltp:/adrg/usr4/mpp1/adrg/TPUS0101
gltp:/adrg/usr4/mpp1/adrg/TPUS0101:"TPUS0102.IMG":Matrix
OGDI Matrix family layers (pseudocolored integer layers) are represented as a single band of raster data with a color table. Though the Matrix layers contain 32-bit integer values, they are represented through GDAL as 8-bit values. All values over 255 are truncated to 255, and only 256 colormap entries are captured. While this works well for most Matrix layers, it is anticipated that at some point in the future Matrix layers with a larger dynamic range will be represented as other data types.
OGDI Image family layers may internally have a type of RGB (1), which is represented as three 8-bit bands in GDAL, or Byte (2), UInt16 (3), Int16 (4) or Int32 (5). There is no support for floating point bands in OGDI 3.1.
The GDAL OGDI driver will represent OGDI datasources as having arbitrary overviews. Any GDAL raster read requests at a reduced resolution will be passed on to the OGDI driver at that reduced resolution; potentially allowing efficient reading of overview information from OGDI datastores.
If an OGDI datastore is opened without selecting a layer name in the dataset name, and if the datastore has OGDI 3.1 style capabilities, the list of layers will be made available as SUBDATASETS metadata. For instance, the gdalinfo command might report the following. This information can be used to establish available layers for direct access.
Subdatasets:
SUBDATASET_1_NAME=gltp:/adrg/usr4/mpp1/adrg/TPUS0101:"TPUS0101.IMG":Matrix
SUBDATASET_1_DESC=TPUS0101.IMG as Matrix
SUBDATASET_2_NAME=gltp:/adrg/usr4/mpp1/adrg/TPUS0101:"TPUS0102.IMG":Matrix
SUBDATASET_2_DESC=TPUS0102.IMG as Matrix
SUBDATASET_3_NAME=gltp:/adrg/usr4/mpp1/adrg/TPUS0101:"TPUS0101.IMG":Image
SUBDATASET_3_DESC=TPUS0101.IMG as Image
SUBDATASET_4_NAME=gltp:/adrg/usr4/mpp1/adrg/TPUS0101:"TPUS0102.IMG":Image
SUBDATASET_4_DESC=TPUS0102.IMG as Image
See Also:
GDAL supports reading OZF2/OZFX3 raster datasets.
Either the image file or the .map file can be passed to GDAL. To retrieve georeferencing, you need to specify the .map file.
This driver provides enhanced support for processed PALSAR products from the JAXA PALSAR processor. This encompasses products acquired from the following organizations:
Support is provided for the following features of PALSAR products:
This is a read-only driver.
To open a PALSAR product, select the volume directory file (for example, VOL-ALPSR000000000-P1.5_UA or VOL-ALPSR000000000-P1.1__A). The driver will then use the information contained in the volume directory file to find the various image files (the IMG-* files), as well as the Leader file. Note that the Leader file is essential for correct operation of the driver.
See Also:
Georeferencing is supported though there may be some limitations in support of datums and ellipsoids. GCP segments are ignored. RPC segments will be returned as GDAL style RPC metadata.
Internal overview (pyramid) images will also be correctly read though newly requested overviews will be built externally as an .ovr file.
Vector segments are supported by the OGR PCIDSK driver.
GDAL supports reading Geospatial PDF documents, by extracting georeferencing information and rasterizing the data. Non-geospatial PDF documents will also be recognized by the driver.
GDAL must be compiled with libpoppler support (GPL-licensed), and libpoppler itself must have been configured with --enable-xpdf-headers so that the xpdf C++ headers are available. Note: the poppler C++ API isn't stable, so the driver compilation may fail with too old or too recent poppler versions. Successfully tested versions are poppler >= 0.12.X and <= 0.16.0.
Starting with GDAL 1.9.0, as an alternative, the PDF driver can be compiled against libpodofo (LGPL-licensed) to avoid the libpoppler dependency. This is sufficient to get the georeferencing information. However, for getting the imagery, the pdftoppm utility that comes with the poppler distribution must be available in the system PATH. A temporary file will be generated in a directory determined by the following configuration options : CPL_TMPDIR, TMPDIR or TEMP (in that order). If none are defined, the current directory will be used. Successfully tested versions are libpodofo 0.8.4 and 0.9.1.
The driver supports reading georeferencing encoded in either of the two currently existing ways: according to the OGC encoding best practice, or according to the Adobe Supplement to ISO 32000.
The dimensions of the raster can be controlled by specifying the DPI of the rasterization with the GDAL_PDF_DPI configuration option. Its default value is 150.
Multipage documents are exposed as subdatasets, one subdataset per page of the document.
The neatline (for OGC best practice) or the bounding box (Adobe style) will be reported as a NEATLINE metadata item, so that it can be later used as a cutline for the warping algorithm.
Starting with GDAL 1.9.0, XMP metadata can be extracted from the file, and will be stored as XML raw content in the xml:XMP metadata domain.
Only a few of the possible Datums available in the OGC best practice spec have been currently mapped in the driver. Unrecognized datums will be considered as being based on the WGS84 ellipsoid.
For documents that contain several neatlines in a page (insets), the georeferencing will be extracted from the inset that has the largest area (in terms of screen points).
There is currently no support for selective layer rendering.
PDS files often have the extension .img, sometimes with an associated .lbl (label) file. When a .lbl file exists it should be used as the dataset name rather than the .img file.
In addition to support for most PDS imagery configurations, this driver also reads georeferencing and coordinate system information as well as selected other header metadata.
Implementation of this driver was supported by the United States Geological Survey.
Due to ambiguities in the PDS specification, the georeferencing of some products is subtly or grossly incorrect. There are configuration variables which may be set for these products to correct the interpretation of the georeferencing. Some details are available in ticket #3940.
PDS is part of a family of related formats including ISIS2 and ISIS3.
Rasdaman is a raster database middleware offering an SQL-style query language on multi-dimensional arrays of unlimited size, stored in a relational database. See www.rasdaman.org for the open-source code, documentation, etc. Currently rasdaman is under consideration for OSGeo incubation.
In our driver implementation, GDAL connects to rasdaman by defining a query template which is instantiated with the concrete subsetting box upon every access. This allows delivering 2-D cutouts from n-D data sets (such as hyperspectral satellite time series, multi-variable climate simulation data, ocean model data, etc.). In particular, virtual imagery can be offered which is derived on demand from ground truth data. Some more technical details are given below.
The connect string syntax follows the WKT Raster pattern and goes like this:
rasdaman: query='select a[$x_lo:$x_hi,$y_lo:$y_hi] from MyImages as a' [tileXSize=1024] [tileYSize=1024] [host='localhost'] [port=7001] [database='RASBASE'] [user='rasguest'] [password='rasguest']
The rasdaman query language (rasql) string in this case only performs subsetting. Upon image access by GDAL, the $ parameters are substituted by the concrete bounding box computed from the input tile coordinates.
However, the query provided can include any kind of processing, as long as it returns something 2-D. For example, this determines the average of red and near-infrared pixels from the oldest image of the time series:
query='select ( a.red+a.nir ) /2 [$x_lo:$x_hi,$y_lo:$y_hi, 0 ] from SatStack as a'
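The $ substitution the driver performs on each access can be illustrated with Python's string.Template (an illustrative sketch; the driver's actual implementation is in C++):

```python
from string import Template

def instantiate_query(query, x_lo, x_hi, y_lo, y_hi):
    """Substitute the $x_lo/$x_hi/$y_lo/$y_hi placeholders of a
    rasdaman query template with a concrete bounding box, as the
    driver does upon every access."""
    return Template(query).substitute(
        x_lo=x_lo, x_hi=x_hi, y_lo=y_lo, y_hi=y_hi)

q = "select a[$x_lo:$x_hi,$y_lo:$y_hi] from MyImages as a"
print(instantiate_query(q, 0, 1023, 0, 1023))
# -> select a[0:1023,0:1023] from MyImages as a
```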
Currently there is no support for reading or writing georeferencing information.
See Also:
Starting with GDAL 1.7.0, the Rasterlite driver allows reading and creating Rasterlite databases.
Those databases can be produced by the utilities of the rasterlite distribution, such as rasterlite_load, rasterlite_pyramids, ....
The driver supports reading grayscale, paletted and RGB images stored as GIF, PNG, TIFF or JPEG tiles. The driver also supports reading overviews/pyramids, spatial reference system and spatial extent.
Wavelet compressed tiles are not supported by default by GDAL, unless the EPSILON driver is compiled.
GDAL/OGR must be compiled with OGR SQLite driver support. For read support, linking against the spatialite library is not required, but a recent enough sqlite3 library is needed to read rasterlite databases. The rasterlite library is not required either.
For writing a new table, linking against the spatialite library *is* required.
Although the Rasterlite documentation only mentions GIF, PNG, TIFF, JPEG and WAVELET (EPSILON driver) as compression formats for tiles, the driver supports reading and writing internal tiles in any format handled by GDAL. Furthermore, the Rasterlite driver also allows reading and writing as many bands and as many band types as supported by the driver used for the internal tiles.
Syntax: 'rasterlitedb_name' or 'RASTERLITE:rasterlitedb_name[,table=raster_table_prefix][,minx=minx_val,miny=miny_val,maxx=maxx_val,maxy=maxy_val][,level=level_number]'
where :
The driver can create a new database if necessary, create a new raster table if necessary and copy a source dataset into the specified raster table.
If data already exists in the raster table, the new data will be added. You can use the WIPE=YES creation option to erase existing data.
The driver does not support updating a block in an existing raster table. It can only append new data.
Syntax for the name of the output dataset: 'RASTERLITE:rasterlitedb_name,table=raster_table_prefix' or 'rasterlitedb_name'
It is possible to specify only the DB name, as in the latter form, but only if the database does not already exist. In that case, the raster table name will be based on the DB name itself.
WIPE (=NO by default): Set to YES to erase all pre-existing data in the specified table
TILED (=YES by default) : Set to NO if the source dataset must be written as a single tile in the raster table
BLOCKXSIZE=n: Sets tile width, defaults to 256.
BLOCKYSIZE=n: Sets tile height, defaults to 256.
DRIVER=[GTiff/GIF/PNG/JPEG/EPSILON/...] : name of the GDAL driver to use for storing tiles. Defaults to GTiff
COMPRESS=[LZW/JPEG/DEFLATE/...] : (GTiff driver) name of the compression method
PHOTOMETRIC=[RGB/YCbCr/...] : (GTiff driver) photometric interpretation
QUALITY : (JPEG-compressed GTiff, JPEG and WEBP drivers) JPEG/WEBP quality 1-100. Defaults to 75
TARGET : (EPSILON driver) target size reduction as a percentage of the original (0-100). Defaults to 96.
FILTER : (EPSILON driver) Filter ID. Defaults to 'daub97lift'.
The driver supports building (if the dataset is opened in update mode) and reading internal overviews.
If no internal overview is detected, the driver will try using external overviews (.ovr files).
$ gdalinfo rasterlitedb.sqlite -noct
Output:
Driver: Rasterlite/Rasterlite Files: rasterlitedb.sqlite Size is 7200, 7200 Coordinate System is: GEOGCS["WGS 84", DATUM["WGS_1984", SPHEROID["WGS 84",6378137,298.257223563, AUTHORITY["EPSG","7030"]], AUTHORITY["EPSG","6326"]], PRIMEM["Greenwich",0, AUTHORITY["EPSG","8901"]], UNIT["degree",0.01745329251994328, AUTHORITY["EPSG","9122"]], AUTHORITY["EPSG","4326"]] Origin = (-5.000000000000000,55.000000000000000) Pixel Size = (0.002083333333333,-0.002083333333333) Metadata: TILE_FORMAT=GIF Image Structure Metadata: INTERLEAVE=PIXEL Corner Coordinates: Upper Left ( -5.0000000, 55.0000000) ( 5d 0'0.00"W, 55d 0'0.00"N) Lower Left ( -5.0000000, 40.0000000) ( 5d 0'0.00"W, 40d 0'0.00"N) Upper Right ( 10.0000000, 55.0000000) ( 10d 0'0.00"E, 55d 0'0.00"N) Lower Right ( 10.0000000, 40.0000000) ( 10d 0'0.00"E, 40d 0'0.00"N) Center ( 2.5000000, 47.5000000) ( 2d30'0.00"E, 47d30'0.00"N) Band 1 Block=480x480 Type=Byte, ColorInterp=Palette Color Table (RGB with 256 entries)
$ gdalinfo multirasterdb.sqlite
Output:
Driver: Rasterlite/Rasterlite Files: Size is 512, 512 Coordinate System is `' Subdatasets: SUBDATASET_1_NAME=RASTERLITE:multirasterdb.sqlite,table=raster1 SUBDATASET_1_DESC=RASTERLITE:multirasterdb.sqlite,table=raster1 SUBDATASET_2_NAME=RASTERLITE:multirasterdb.sqlite,table=raster2 SUBDATASET_2_DESC=RASTERLITE:multirasterdb.sqlite,table=raster2 Corner Coordinates: Upper Left ( 0.0, 0.0) Lower Left ( 0.0, 512.0) Upper Right ( 512.0, 0.0) Lower Right ( 512.0, 512.0) Center ( 256.0, 256.0)
$ gdalinfo RASTERLITE:multirasterdb.sqlite,table=raster1
$ gdal_translate -of Rasterlite source.tif RASTERLITE:my_db.sqlite,table=source -co DRIVER=JPEG
$ gdaladdo RASTERLITE:my_db.sqlite,table=source 2 4 8 16
$ gdaladdo -clean RASTERLITE:my_db.sqlite,table=source
$ gdaladdo -ro RASTERLITE:my_db.sqlite,table=source 2 4 8 16
See Also:
Currently there is no support for reading or writing georeferencing information.
This driver is based on the work done in the TRikPanel project.
NOTE: Implemented as gdal/frmts/rik/rikdataset.cpp.
There are two flavours of RMF called MTW and RSW. MTW supports 16-bit integer and 32/64-bit floating point data in a single channel and aimed to store DEM data. RSW is a general purpose raster, it supports single channel colormapped or three channel RGB images. Only 8-bit data can be stored in RSW. Simple georeferencing can be provided for both image types.
The RadarSat 2 XML products are distributed with a primary XML file called product.xml, and a set of supporting XML data files with the actual imagery stored in TIFF files. The RS2 driver will be used if the product.xml or the containing directory is selected, and it can treat all the imagery as one consistent dataset.
The complex products use "32bit void typed" TIFF files which are not meaningfully readable normally. The RS2 driver takes care of converting this into useful CInt16 format internally.
The RS2 driver also reads geolocation tiepoints from the product.xml file and represents them as GCPs on the dataset.
It is very likely that RadarSat International will be distributing other sorts of datasets in this format; however, at this time this driver only supports specific RadarSat 2 polarimetric products. All others will be ignored, or result in various runtime errors. It is hoped that this driver can be generalized when other product samples become available.
Note that geocoded (SPG/SSG) products do not have this functionality available. Also be aware that the LUTs must be in the product directory where specified in the product.xml, otherwise loading the product with the calibration LUT applied will fail.
One caveat worth noting is that the RADARSAT-2 driver will supply the calibrated data as GDT_Float32 or GDT_CFloat32 depending on the type of calibration selected. The uncalibrated data is provided as GDT_Int16/GDT_Byte/GDT_CInt16, also depending on the type of product selected.
See Also:
ESRI ArcSDE provides an abstraction layer over numerous databases that allows the storage of raster data. ArcSDE supports n-band imagery at many bit depths, and the current implementation of the GDAL driver should support as many bands as you can throw at it. ArcSDE supports the storage of LZW, JP2K, and uncompressed data and transparently presents this through its C API SDK.
The current driver supports the following features:
The current driver does not support the following:
The ArcSDE raster driver currently only supports block read methods. Each call to this method results in a request for a block of raster data for each band in the raster; single-pass requests for all of the bands of a block or given area are not currently made. This approach consequently incurs extra network overhead. It is hoped that the driver will be improved to support single-pass reads in the near future.
The ArcSDE raster driver should only consume a single ArcSDE connection throughout the lifespan of the dataset. Each connection to the database has an overhead of approximately 2 seconds, with additional overhead that is taken for calculating dataset information. Therefore, usage of the driver in situations where there is a lot of opening and closing of datasets is not expected to be very performant.
Although the ArcSDE C SDK does support threading and locking, the GDAL ArcSDE raster driver does not utilize this capability. Therefore, the ArcSDE raster driver should be considered not threadsafe, and sharing datasets between threads will have undefined (and often disastrous) results.
SDE datasets are specified with the following information:
SDE:sdemachine.iastate.edu,5151,database,username,password,fully.specified.tablename,RASTER
band::GetUnitType() returns meters.
Elevations are Int16. You must use the band::GetScale() and band::GetOffset() to convert them to meters.
Call dataset::SetProjection() and dataset::SetGeoTransform() with your coordinate system details. Otherwise, the driver will not encode physical elevations properly. Geographic (degree-based) coordinate systems will be converted to a local meter-based system.
To maintain precision, a best-fit base height and scale factor will be chosen so as to use as much of the 16-bit range as possible.
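The conversion implied by GetScale() and GetOffset() is simply meters = raw * scale + offset. A minimal sketch (the scale and offset values here are hypothetical; in practice they come from the band):

```python
def raw_to_meters(dn, scale, offset):
    """Convert a raw Int16 elevation sample to meters using the
    band's scale and offset: meters = dn * scale + offset."""
    return dn * scale + offset

# e.g. a raw value of 1200 with a scale of 0.25 and an offset of 100.0:
print(raw_to_meters(1200, 0.25, 100.0))  # -> 400.0
```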
Elevations are Float32.
The 7.5 minute (UTM grid) USGS DEM files will generally have regions of missing data around the edges, and these are properly marked with a nodata value. Elevation values in USGS DEM files may be in meters or feet, and this will be indicated by the return value of GDALRasterBand::GetUnitType() (either "m" or "ft").
Note that USGS DEM files are represented as one big tile. This may cause cache thrashing problems if the GDAL tile cache size is small. It will also result in a substantial delay when the first pixel is read as the whole file will be ingested.
Some of the code for implementing usgsdemdataset.cpp was derived from VTP code by Ben Discoe. See the Virtual Terrain project for more information on VTP.
Input data must already be sampled in a geographic or UTM coordinate system. By default the entire area of the input file will be output, but for CDED50K products the output file will be sampled at the production specified resolution and on product tile boundaries.
If the input file has appropriate coordinate system information set, export to specific product formats can take input in different coordinate systems (i.e. from Albers projection to NAD83 geographic for CDED50K production).
Creation Options:
gdal_translate -of USGSDEM -co PRODUCT=CDED50K -co TEMPLATE=yk_template.dem \ -co TOPLEFT=-117w,60n yk_3arcsec 031a01_e.dem
The USGS DEM reading code in GDAL was derived from the importer in the VTP software. The export capability was developed with the financial support of the Yukon Department of Environment.
An example of a simple .vrt file referring to a 512x512 dataset with one band loaded from utm.tif might look like this:
<VRTDataset rasterXSize= "512" rasterYSize= "512"> <GeoTransform>440720.0, 60.0, 0.0, 3751320.0, 0.0, -60.0</GeoTransform> <VRTRasterBand dataType= "Byte" band= "1"> <ColorInterp>Gray</ColorInterp> <SimpleSource> <SourceFilename relativeToVRT= "1">utm.tif</SourceFilename> <SourceBand>1</SourceBand> <SrcRect xOff= "0" yOff= "0" xSize= "512" ySize= "512"/> <DstRect xOff= "0" yOff= "0" xSize= "512" ySize= "512"/> </SimpleSource> </VRTRasterBand> </VRTDataset>
Many aspects of the VRT file are a direct XML encoding of the GDAL Data Model, which should be reviewed to understand the semantics of the various elements.
VRT files can be produced by translating to VRT format. The resulting file can then be edited to modify mappings, add metadata, or for other purposes. VRT files can also be produced programmatically by various means.
This tutorial will cover the .vrt file format (suitable for users editing .vrt files), and how .vrt files may be created and manipulated programmatically for developers.
VRTDataset: This is the root element for the whole GDAL dataset. It must have the attributes rasterXSize and rasterYSize describing the width and height of the dataset in pixels. It may have SRS, GeoTransform, GCPList, Metadata, MaskBand and VRTRasterBand subelements.
<VRTDataset rasterXSize= "512" rasterYSize= "512">
The allowed subelements for VRTDataset are :
<SRS>PROJCS[NAD27 / UTM zone 11N,GEOGCS[NAD27,DATUM[North_American_Datum_1927, SPHEROID[Clarke 1866,6378206.4,294.9786982139006, AUTHORITY[EPSG,7008]], AUTHORITY[EPSG,6267]], PRIMEM[Greenwich,0],UNIT[degree,0.0174532925199433], AUTHORITY[EPSG,4267]],PROJECTION[Transverse_Mercator], PARAMETER[latitude_of_origin,0],PARAMETER[central_meridian,-117], PARAMETER[scale_factor,0.9996],PARAMETER[false_easting,500000], PARAMETER[false_northing,0],UNIT[metre,1, AUTHORITY[EPSG,9001]],AUTHORITY[EPSG,26711]]</SRS>
<GeoTransform>440720.0, 60, 0.0, 3751320.0, 0.0, -60.0</GeoTransform>
<Metadata> <MDI key= "md_key">Metadata value</MDI> </Metadata>
<MaskBand> <VRTRasterBand dataType= "Byte"> <SimpleSource> <SourceFilename relativeToVRT= "1">utm.tif</SourceFilename> <SourceBand>mask,1</SourceBand> <SrcRect xOff= "0" yOff= "0" xSize= "512" ySize= "512"/> <DstRect xOff= "0" yOff= "0" xSize= "512" ySize= "512"/> </SimpleSource> </VRTRasterBand> </MaskBand>
The allowed subelements for VRTRasterBand are :
<ColorInterp>Gray</ColorInterp>:
<NoDataValue>-100.0</NoDataValue>
Default value is 0 when this element is absent.
<HideNoDataValue>1</HideNoDataValue>
<ColorTable> <Entry c1= "0" c2= "0" c3= "0" c4= "255"/> <Entry c1= "145" c2= "78" c3= "224" c4= "255"/> </ColorTable>
<Description>Crop Classification Layer</Description>
<UnitType>ft</UnitType>
<Offset>0.0</Offset>
<Scale>0.0</Scale>
<Overview> <SourceFilename relativeToVRT= "1">yellowstone_2.1.ntf.r2</SourceFilename> <SourceBand>1</SourceBand> </Overview>
<CategoryNames>
    <Category>Missing</Category>
    <Category>Non-Crop</Category>
    <Category>Wheat</Category>
    <Category>Corn</Category>
    <Category>Soybeans</Category>
</CategoryNames>
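The <Offset> and <Scale> elements above describe how raw pixel values relate to values in the band's UnitType: unit value = raw value * scale + offset. A minimal sketch of that conversion (plain C++; rawToUnits is an illustrative name, not a GDAL function):

```cpp
#include <cassert>

// Convert a raw stored pixel value to the band's UnitType using the
// band's <Scale> and <Offset> elements: units = raw * scale + offset.
double rawToUnits(double raw, double scale, double offset)
{
    return raw * scale + offset;
}
```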
The relativeToVRT attribute on the SourceFilename indicates whether the filename should be interpreted as relative to the .vrt file (value is 1) or not relative to the .vrt file (value is 0). The default is 0.
Some characteristics of the source band can be specified in the optional SourceProperties element to enable the VRT driver to defer opening the source dataset until it actually needs to read data from it. This is particularly useful when building VRTs with a large number of source datasets. The required parameters are the raster dimensions, the block size and the data type. If the SourceProperties element is not present, the source dataset is opened at the same time as the VRT itself.
Starting with GDAL 1.8.0, the content of the SourceBand subelement can refer to a mask band. For example, mask,1 means the mask band of the first band of the source.
<SimpleSource>
    <SourceFilename relativeToVRT="1">utm.tif</SourceFilename>
    <SourceBand>1</SourceBand>
    <SourceProperties RasterXSize="512" RasterYSize="512" DataType="Byte" BlockXSize="128" BlockYSize="128"/>
    <SrcRect xOff="0" yOff="0" xSize="512" ySize="512"/>
    <DstRect xOff="0" yOff="0" xSize="512" ySize="512"/>
</SimpleSource>
The ComplexSource supports adding a custom lookup table to transform the source values to the destination. The LUT can be specified using the following form:
<LUT>[src value 1]:[dest value 1],[src value 2]:[dest value 2],...</LUT>
The intermediary values are calculated using a linear interpolation between the bounding destination values of the corresponding range.
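A sketch of that interpolation in plain C++ (applyLut is an illustrative helper; clamping values outside the table to the endpoint destination values is an assumption of this sketch, not a documented behaviour):

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Apply a ComplexSource-style LUT: piecewise-linear interpolation
// between (src,dst) breakpoints, which must be sorted by src value.
double applyLut(const std::vector<std::pair<double,double> > &lut, double v)
{
    if (v <= lut.front().first) return lut.front().second;
    if (v >= lut.back().first)  return lut.back().second;
    for (size_t i = 1; i < lut.size(); ++i)
    {
        if (v <= lut[i].first)
        {
            // Linear interpolation between the bounding breakpoints.
            const double t = (v - lut[i-1].first)
                           / (lut[i].first - lut[i-1].first);
            return lut[i-1].second + t * (lut[i].second - lut[i-1].second);
        }
    }
    return lut.back().second;
}
```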
The ComplexSource supports fetching a color component from a source raster band that has a color table. The ColorTableComponent value is the index of the color component to extract: 1 for the red band, 2 for the green band, 3 for the blue band or 4 for the alpha band.
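A trivial sketch of that indexing convention (plain C++; colorComponent is illustrative, assuming a color table entry stored as c1..c4 = red, green, blue, alpha as in the <ColorTable> example earlier):

```cpp
#include <cassert>

// ColorTableComponent selects one component of a color table entry:
// 1 = red (c1), 2 = green (c2), 3 = blue (c3), 4 = alpha (c4).
int colorComponent(const int entry[4], int component)
{
    return entry[component - 1];
}
```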
When transforming the source values the operations are executed in the following order:
<ComplexSource>
    <SourceFilename relativeToVRT="1">utm.tif</SourceFilename>
    <SourceBand>1</SourceBand>
    <ScaleOffset>0</ScaleOffset>
    <ScaleRatio>1</ScaleRatio>
    <ColorTableComponent>1</ColorTableComponent>
    <LUT>0:0,2345.12:64,56789.5:128,2364753.02:255</LUT>
    <NODATA>0</NODATA>
    <SrcRect xOff="0" yOff="0" xSize="512" ySize="512"/>
    <DstRect xOff="0" yOff="0" xSize="512" ySize="512"/>
</ComplexSource>
<KernelFilteredSource>
    <SourceFilename>/debian/home/warmerda/openev/utm.tif</SourceFilename>
    <SourceBand>1</SourceBand>
    <Kernel normalized="1">
        <Size>3</Size>
        <Coefs>0.11111111 0.11111111 0.11111111 0.11111111 0.11111111 0.11111111 0.11111111 0.11111111 0.11111111</Coefs>
    </Kernel>
</KernelFilteredSource>
For example, the following .vrt describes a raw raster file containing floating point complex pixels in a file called l2p3hhsso.img. The image data starts from the first byte (ImageOffset=0). The byte offset between pixels is 8 (PixelOffset=8), the size of a CFloat32. The byte offset from the start of one line to the start of the next is 9376 bytes (LineOffset=9376) which is the width (1172) times the size of a pixel (8).
<VRTDataset rasterXSize="1172" rasterYSize="1864">
    <VRTRasterBand dataType="CFloat32" band="1" subClass="VRTRawRasterBand">
        <SourceFilename relativetoVRT="1">l2p3hhsso.img</SourceFilename>
        <ImageOffset>0</ImageOffset>
        <PixelOffset>8</PixelOffset>
        <LineOffset>9376</LineOffset>
        <ByteOrder>MSB</ByteOrder>
    </VRTRasterBand>
</VRTDataset>
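The offset arithmetic described above can be written down directly. A sketch (plain C++; rawBandByteOffset is an illustrative helper, not a GDAL API) of where pixel (pixel, line) lives in the raw file:

```cpp
#include <cassert>

// Byte offset of pixel (pixel, line) within a VRTRawRasterBand's file,
// per the ImageOffset / PixelOffset / LineOffset elements.
long long rawBandByteOffset(long long imageOffset, long long pixelOffset,
                            long long lineOffset, int pixel, int line)
{
    return imageOffset + (long long)line * lineOffset
                       + (long long)pixel * pixelOffset;
}
```

For the l2p3hhsso.img example, the first pixel of the second line starts at byte 9376, and the last pixel of the first line (pixel 1171) at byte 9368.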
Some things to note are that the VRTRasterBand has a subClass specifier of "VRTRawRasterBand". Also, the VRTRawRasterBand contains a number of previously unseen elements but no "source" information. VRTRawRasterBands may never have sources (i.e. SimpleSource), but should contain the following elements in addition to all the normal "metadata" elements previously described, which are still supported.
A few other notes:
Here is another example, in this case a 400x300 RGB pixel-interleaved image.
<VRTDataset rasterXSize="400" rasterYSize="300">
    <VRTRasterBand dataType="Byte" band="1" subClass="VRTRawRasterBand">
        <ColorInterp>Red</ColorInterp>
        <SourceFilename relativetoVRT="1">rgb.raw</SourceFilename>
        <ImageOffset>0</ImageOffset>
        <PixelOffset>3</PixelOffset>
        <LineOffset>1200</LineOffset>
    </VRTRasterBand>
    <VRTRasterBand dataType="Byte" band="2" subClass="VRTRawRasterBand">
        <ColorInterp>Green</ColorInterp>
        <SourceFilename relativetoVRT="1">rgb.raw</SourceFilename>
        <ImageOffset>1</ImageOffset>
        <PixelOffset>3</PixelOffset>
        <LineOffset>1200</LineOffset>
    </VRTRasterBand>
    <VRTRasterBand dataType="Byte" band="3" subClass="VRTRawRasterBand">
        <ColorInterp>Blue</ColorInterp>
        <SourceFilename relativetoVRT="1">rgb.raw</SourceFilename>
        <ImageOffset>2</ImageOffset>
        <PixelOffset>3</PixelOffset>
        <LineOffset>1200</LineOffset>
    </VRTRasterBand>
</VRTDataset>
To create a VRT dataset that is a clone of an existing dataset use the CreateCopy() method. For example to clone utm.tif into a wrk.vrt file in C++ the following could be used:
GDALDriver *poDriver = (GDALDriver *) GDALGetDriverByName( "VRT" );
GDALDataset *poSrcDS, *poVRTDS;

poSrcDS = (GDALDataset *) GDALOpenShared( "utm.tif", GA_ReadOnly );

poVRTDS = poDriver->CreateCopy( "wrk.vrt", poSrcDS, FALSE, NULL, NULL, NULL );

GDALClose((GDALDatasetH) poVRTDS);
GDALClose((GDALDatasetH) poSrcDS);
Note the use of GDALOpenShared() when opening the source dataset. It is advised to use GDALOpenShared() in this situation so that you are able to release the explicit reference to it before closing the VRT dataset itself. In other words, in the previous example, you could also swap the last two lines, whereas if you open the source dataset with GDALOpen(), you would need to close the VRT dataset before closing the source dataset.
To create a virtual copy of a dataset with some attributes added or changed, such as metadata or a coordinate system that are often hard to change in other formats, you might do the following. In this case, the virtual dataset is created "in memory" only, by virtue of creating it with an empty filename, and then used as a modified source for a CreateCopy() that writes it out in TIFF format.
poVRTDS = poDriver->CreateCopy( "", poSrcDS, FALSE, NULL, NULL, NULL );

poVRTDS->SetMetadataItem( "SourceAgency", "United States Geological Survey" );
poVRTDS->SetMetadataItem( "SourceDate", "July 21, 2003" );

poVRTDS->GetRasterBand( 1 )->SetNoDataValue( -999.0 );

GDALDriver *poTIFFDriver = (GDALDriver *) GDALGetDriverByName( "GTiff" );
GDALDataset *poTiffDS;

poTiffDS = poTIFFDriver->CreateCopy( "wrk.tif", poVRTDS, FALSE, NULL, NULL, NULL );

GDALClose((GDALDatasetH) poTiffDS);
In the above example the nodata value is set to -999. You can set the HideNoDataValue element in the VRT dataset's band using SetMetadataItem() on that band.
poVRTDS->GetRasterBand( 1 )->SetMetadataItem( "HideNoDataValue" , "1" );
In this example a virtual dataset is created with the Create() method, and bands and sources are added programmatically, but still via the "generic" API. A special attribute of VRT datasets is that sources can be added to the VRTRasterBand (but not to VRTRawRasterBand) by passing the XML describing the source into SetMetadata() on the special domain target "new_vrt_sources". The domain target "vrt_sources" may also be used, in which case any existing sources will be discarded before adding the new ones. In this example we construct a simple averaging filter source instead of using the simple source.
// Construct XML for a simple 3x3 average filter kernel source.
const char *pszFilterSourceXML =
"<KernelFilteredSource>"
"  <SourceFilename>utm.tif</SourceFilename><SourceBand>1</SourceBand>"
"  <Kernel>"
"    <Size>3</Size>"
"    <Coefs>0.111 0.111 0.111 0.111 0.111 0.111 0.111 0.111 0.111</Coefs>"
"  </Kernel>"
"</KernelFilteredSource>";

// Create the virtual dataset.
poVRTDS = poDriver->Create( "", 512, 512, 1, GDT_Byte, NULL );

poVRTDS->GetRasterBand(1)->SetMetadataItem( "source_0", pszFilterSourceXML,
                                            "new_vrt_sources" );
A more general form of this, producing a 3x3 average filtered clone of any input datasource, might look like the following. In this case we deliberately set the filtered source in the "vrt_sources" domain to override the SimpleSource created by the CreateCopy() method. The fact that we used CreateCopy() ensures that all the other metadata, georeferencing and so forth is preserved from the source dataset ... the only thing we are changing is the data source for each band.
int nBand;
GDALDriver *poDriver = (GDALDriver *) GDALGetDriverByName( "VRT" );
GDALDataset *poSrcDS, *poVRTDS;

poSrcDS = (GDALDataset *) GDALOpenShared( pszSourceFilename, GA_ReadOnly );

poVRTDS = poDriver->CreateCopy( "", poSrcDS, FALSE, NULL, NULL, NULL );

for( nBand = 1; nBand <= poVRTDS->GetRasterCount(); nBand++ )
{
    char szFilterSourceXML[10000];

    GDALRasterBand *poBand = poVRTDS->GetRasterBand( nBand );

    sprintf( szFilterSourceXML,
        "<KernelFilteredSource>"
        "  <SourceFilename>%s</SourceFilename><SourceBand>%d</SourceBand>"
        "  <Kernel>"
        "    <Size>3</Size>"
        "    <Coefs>0.111 0.111 0.111 0.111 0.111 0.111 0.111 0.111 0.111</Coefs>"
        "  </Kernel>"
        "</KernelFilteredSource>",
        pszSourceFilename, nBand );

    poBand->SetMetadataItem( "source_0", szFilterSourceXML, "vrt_sources" );
}
The VRTDataset class is one of the few dataset implementations that supports the AddBand() method. The options passed to the AddBand() method can be used to control the type of the band created (VRTRasterBand, VRTRawRasterBand, VRTDerivedRasterBand), and in the case of the VRTRawRasterBand to set its various parameters. For standard VRTRasterBand, sources should be specified with the above SetMetadata() / SetMetadataItem() examples.
GDALDriver *poDriver = (GDALDriver *) GDALGetDriverByName( "VRT" );
GDALDataset *poVRTDS;

poVRTDS = poDriver->Create( "out.vrt", 512, 512, 0, GDT_Byte, NULL );

char** papszOptions = NULL;
papszOptions = CSLAddNameValue(papszOptions, "subclass", "VRTRawRasterBand"); // if not specified, default to VRTRasterBand
papszOptions = CSLAddNameValue(papszOptions, "SourceFilename", "src.tif"); // mandatory
papszOptions = CSLAddNameValue(papszOptions, "ImageOffset", "156"); // optional. default = 0
papszOptions = CSLAddNameValue(papszOptions, "PixelOffset", "2"); // optional. default = size of band type
papszOptions = CSLAddNameValue(papszOptions, "LineOffset", "1024"); // optional. default = size of band type * width
papszOptions = CSLAddNameValue(papszOptions, "ByteOrder", "LSB"); // optional. default = machine order
papszOptions = CSLAddNameValue(papszOptions, "RelativeToVRT", "true"); // optional. default = false
poVRTDS->AddBand(GDT_Byte, papszOptions);
CSLDestroy(papszOptions);

delete poVRTDS;
A specialized type of band is a 'derived' band which derives its pixel information from its source bands. With this type of band you must also specify a pixel function, which has the responsibility of generating the output raster. Pixel functions are created by an application and then registered with GDAL using a unique key.
Using derived bands you can create VRT datasets that manipulate bands on the fly without having to create new band files on disk. For example, you might want to generate a band using four source bands from a nine band input dataset (x0, x3, x4, and x8):
band_value = sqrt((x3*x3+x4*x4)/(x0*x8));
You could write the pixel function to compute this value and then register it with GDAL with the name "MyFirstFunction". Then, the following VRT XML could be used to display this derived band:
<VRTDataset rasterXSize="1000" rasterYSize="1000">
    <VRTRasterBand dataType="Float32" band="1" subClass="VRTDerivedRasterBand">
        <Description>Magnitude</Description>
        <PixelFunctionType>MyFirstFunction</PixelFunctionType>
        <SimpleSource>
            <SourceFilename relativeToVRT="1">nine_band.dat</SourceFilename>
            <SourceBand>1</SourceBand>
            <SrcRect xOff="0" yOff="0" xSize="1000" ySize="1000"/>
            <DstRect xOff="0" yOff="0" xSize="1000" ySize="1000"/>
        </SimpleSource>
        <SimpleSource>
            <SourceFilename relativeToVRT="1">nine_band.dat</SourceFilename>
            <SourceBand>4</SourceBand>
            <SrcRect xOff="0" yOff="0" xSize="1000" ySize="1000"/>
            <DstRect xOff="0" yOff="0" xSize="1000" ySize="1000"/>
        </SimpleSource>
        <SimpleSource>
            <SourceFilename relativeToVRT="1">nine_band.dat</SourceFilename>
            <SourceBand>5</SourceBand>
            <SrcRect xOff="0" yOff="0" xSize="1000" ySize="1000"/>
            <DstRect xOff="0" yOff="0" xSize="1000" ySize="1000"/>
        </SimpleSource>
        <SimpleSource>
            <SourceFilename relativeToVRT="1">nine_band.dat</SourceFilename>
            <SourceBand>9</SourceBand>
            <SrcRect xOff="0" yOff="0" xSize="1000" ySize="1000"/>
            <DstRect xOff="0" yOff="0" xSize="1000" ySize="1000"/>
        </SimpleSource>
    </VRTRasterBand>
</VRTDataset>
In addition to the subclass specification (VRTDerivedRasterBand) and the PixelFunctionType value, there is another new parameter that can come in handy: SourceTransferType. Typically the source rasters are obtained using the data type of the derived band. There might be times, however, when you want the pixel function to have access to higher resolution source data than the data type being generated. For example, you might have a derived band of type "Float", which takes a single source of type "CFloat32" or "CFloat64", and returns the imaginary portion. To accomplish this, set the SourceTransferType to "CFloat64". Otherwise the source would be converted to "Float" prior to calling the pixel function, and the imaginary portion would be lost.
<VRTDataset rasterXSize="1000" rasterYSize="1000">
    <VRTRasterBand dataType="Float32" band="1" subClass="VRTDerivedRasterBand">
        <Description>Magnitude</Description>
        <PixelFunctionType>MyFirstFunction</PixelFunctionType>
        <SourceTransferType>CFloat64</SourceTransferType>
        ...
To register this function with GDAL (prior to accessing any VRT datasets with derived bands that use this function), an application calls GDALAddDerivedBandPixelFunc with a key and a GDALDerivedPixelFunc:
GDALAddDerivedBandPixelFunc( "MyFirstFunction", TestFunction);
A good time to do this is at the beginning of an application when the GDAL drivers are registered.
GDALDerivedPixelFunc is defined with a signature similar to IRasterIO:
papoSources: A pointer to packed rasters, one per source. The data type of all will be the same, specified in the eSrcType parameter.
nSources: The number of source rasters.
pData: The buffer into which the data should be read, or from which it should be written. This buffer must contain at least nBufXSize * nBufYSize words of type eBufType. It is organized in left to right, top to bottom pixel order. Spacing is controlled by the nPixelSpace and nLineSpace parameters.
nBufXSize: The width of the buffer image into which the desired region is to be read, or from which it is to be written.
nBufYSize: The height of the buffer image into which the desired region is to be read, or from which it is to be written.
eSrcType: The type of the pixel values in the papoSources raster array.
eBufType: The type of the pixel values that the pixel function must generate in the pData data buffer.
nPixelSpace: The byte offset from the start of one pixel value in pData to the start of the next pixel value within a scanline. If defaulted (0) the size of the datatype eBufType is used.
nLineSpace: The byte offset from the start of one scanline in pData to the start of the next.
typedef CPLErr
(*GDALDerivedPixelFunc)( void **papoSources, int nSources, void *pData,
                         int nXSize, int nYSize,
                         GDALDataType eSrcType, GDALDataType eBufType,
                         int nPixelSpace, int nLineSpace );
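The nPixelSpace and nLineSpace parameters determine where each output pixel lands in pData. A sketch of the addressing (plain C++; bufferOffset is an illustrative helper mirroring the usual pointer arithmetic; with defaulted spacings, nPixelSpace is the size of eBufType and nLineSpace is nPixelSpace * nBufXSize):

```cpp
#include <cassert>
#include <cstddef>

// Byte offset of buffer element (col, row) in pData, given the pixel
// and line spacings in bytes.
size_t bufferOffset(int col, int row, int nPixelSpace, int nLineSpace)
{
    return (size_t)row * nLineSpace + (size_t)col * nPixelSpace;
}
```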
The following is an implementation of the pixel function:
#include "gdal.h"
#include <math.h>

CPLErr TestFunction( void **papoSources, int nSources, void *pData,
                     int nXSize, int nYSize,
                     GDALDataType eSrcType, GDALDataType eBufType,
                     int nPixelSpace, int nLineSpace )
{
    int ii, iLine, iCol;
    double pix_val;
    double x0, x3, x4, x8;

    // ---- Init ----
    if (nSources != 4)
        return CE_Failure;

    // ---- Set pixels ----
    for( iLine = 0; iLine < nYSize; iLine++ )
    {
        for( iCol = 0; iCol < nXSize; iCol++ )
        {
            ii = iLine * nXSize + iCol;

            /* Source raster pixels may be obtained with the SRCVAL macro */
            x0 = SRCVAL(papoSources[0], eSrcType, ii);
            x3 = SRCVAL(papoSources[1], eSrcType, ii);
            x4 = SRCVAL(papoSources[2], eSrcType, ii);
            x8 = SRCVAL(papoSources[3], eSrcType, ii);

            pix_val = sqrt((x3*x3+x4*x4)/(x0*x8));

            GDALCopyWords(&pix_val, GDT_Float64, 0,
                          ((GByte *)pData) + nLineSpace * iLine + iCol * nPixelSpace,
                          eBufType, nPixelSpace, 1);
        }
    }

    // ---- Return success ----
    return CE_None;
}
Supported for read and write access, including reading of an affine georeferencing transform and some projections. This format is the ASCII interchange format for Arc/Info Grid, and takes the form of an ASCII file, plus sometimes an associated .prj file. It is normally produced with the Arc/Info ASCIIGRID command.
The projections support (read if a *.prj file is available) is quite limited. Additional sample .prj files may be sent to the maintainer, warmerdam@pobox.com.
The NODATA value for the grid read is also preserved when available.
By default, the datatype returned for AAIGRID datasets by GDAL is autodetected and set to Float32 for grids with floating point values, or Int32 otherwise. This is done by analysing the format of the NODATA value and the first 100 KB of data of the grid. From GDAL 1.8.0, you can explicitly specify the datatype by setting the AAIGRID_DATATYPE configuration option (Int32, Float32 and Float64 values are currently supported).
If the pixels being written are not square (the width and height of a pixel in georeferenced units differ), then DX and DY parameters will be output instead of CELLSIZE. Such files can be used in Golden Surfer, but not in most other ASCII grid reading programs. To force the X pixel size to be used as CELLSIZE, use the FORCE_CELLSIZE=YES creation option, or resample the input to have square pixels.
When writing floating-point values, the driver uses the "%6.20g" format pattern by default. You can consult a printf reference manual for the exact behaviour of this ;-). You can alternatively specify the number of decimal places with the DECIMAL_PRECISION creation option. For example, DECIMAL_PRECISION=3 will output numbers with 3 decimal places.
The AIG driver is also available for Arc/Info Binary Grid format.
NOTE: Implemented as gdal/frmts/aaigrid/aaigriddataset.cpp.
(GDAL >= 1.9.0)
This is a convenience driver to read ACE2 DEMs. These files contain raw binary data. The georeferencing is entirely determined by the filename. Quality, source and confidence layers are of Int16 type, whereas elevation data is returned as Float32.
NOTE: Implemented as gdal/frmts/raw/ace2dataset.cpp.
Supported by GDAL for read access. Creation is possible, but it must be considered experimental and a means of testing read access (although files created by the driver can be read successfully by other GIS software).
An ADRG dataset is made of several files. The file recognised by GDAL is the General Information File (.GEN). GDAL will also need the image file (.IMG), where the actual data is.
The Transmission Header File (.THF) can also be used as an input to GDAL. If the THF references more than one image, GDAL will report the images it is composed of as subdatasets. If the THF references just one image, GDAL will open it directly.
Overviews, legends and insets are not used. Polar zones (ARC zone 9 and 18) are not supported (due to the lack of test data).
This is an alternative to using the OGDI Bridge for ADRG datasets.
See also : the ADRG specification (MIL-A-89007)
Supported by GDAL for read access. This format is the internal binary format for Arc/Info Grid, and takes the form of a coverage level directory in an Arc/Info database. To open the coverage select the coverage directory, or an .adf file (such as hdr.adf) from within it. If the directory does not contain file(s) with names like w001001.adf then it is not a grid coverage.
Support includes reading of an affine georeferencing transform, some projections, and a color table (.clr) if available.
This driver is implemented based on a reverse engineering of the format. See the format description for more details.
The projections support (read if a prj.adf file is available) is quite limited. Additional sample prj.adf files may be sent to the maintainer, warmerdam@pobox.com .
NOTE: Implemented as gdal/frmts/aigrid/aigdataset.cpp.
BSB Nautical Chart format is supported for read access, including reading the colour table and the reference points (as GCPs). Note that the .BSB files cannot be selected directly. Instead select the .KAP files. Versions 1.1, 2.0 and 3.0 have been tested successfully.
This driver should also support GEO/NOS format as supplied by Softchart. These files normally have the extension .nos with associated .geo files containing georeferencing ... the .geo files are currently ignored.
This driver is based on work by Mike Higgins. See the frmts/bsb/bsb_read.c files for details on patents affecting BSB format.
Starting with GDAL 1.6.0, it is possible to select an alternate color palette via the BSB_PALETTE configuration option. The default value is RGB. Other common values that can be found are: DAY, DSK, NGT, NGR, GRY, PRC, PRG...
NOTE: Implemented as gdal/frmts/bsb/bsbdataset.cpp.
The .bt format is used for elevation data in the VTP software. The driver includes support for reading and writing .bt 1.3 format including support for Int16, Int32 and Float32 pixel data types.
The driver does not support reading or writing gzipped (.bt.gz) .bt files, even though this is supported by the VTP software. Please unpack the files before use with GDAL, e.g. with "gzip -d file.bt.gz".
Projections in external .prj files are read and written, and support for most internally defined coordinate systems is also available.
Read/write imagery access with the GDAL .bt driver is terribly slow due to a very inefficient access strategy to this column oriented data. This could be corrected, but it would be a fair effort.
NOTE: Implemented as gdal/frmts/raw/btdataset.cpp.
See Also: The BT file format is defined on the VTP web site.
This is a simple, read-only reader for CEOS image files. To use, select the main imagery file. This driver reads only the image data, and does not capture any metadata or georeferencing.
This driver is known to work with CEOS data produced by Spot Image, but will have problems with many other data sources. In particular, it will only work with eight bit unsigned data.
See the separate SAR_CEOS driver for access to SAR CEOS data products.
NOTE: Implemented as gdal/frmts/ceos/ceosdataset.cpp.
(GDAL >= 1.9.0)
This driver can read USGS Land Use and Land Cover (LULC) grids encoded in the Character Composite Theme Grid (CTG) format. Each file is reported as a 6-band dataset of type Int32. The meaning of each band is the following:
Those files are typically named grid_cell.gz, grid_cell1.gz or grid_cell2.gz on the USGS site.
NOTE: Implemented as gdal/frmts/ctg/ctgdataset.cpp.
This is a read-only reader for Spot DIMAP described images. To use, select the METADATA.DIM file in a product directory, or the product directory itself.
The imagery is in a distinct imagery file, often a TIFF file, but the DIMAP dataset handles accessing that file, and attaches geolocation and other metadata to the dataset from the metadata xml file.
From GDAL 1.6.0, the content of the <Spectral_Band_Info> node is reported as metadata at the level of the raster band. Note that the content of the Spectral_Band_Info of the first band is still reported as metadata of the dataset, but this should be considered as a deprecated way of getting this information.
NOTE: Implemented as gdal/frmts/dimap/dimapdataset.cpp.
Support for read access to DODS/OPeNDAP servers. Pass the DODS/OPeNDAP URL to the driver as you would when accessing a local file. The URL specifies the remote server, data set and raster within the data set. In addition, you must tell the driver which dimensions are to be interpreted as distinct bands as well as which correspond to Latitude and Longitude. See the file README.DODS for more detailed information.
Support for read access, including reading of an affine georeferencing transform, and capture of the projection string. This format is the old, unlabelled DOQ (Digital Ortho Quad) format from the USGS.
NOTE: Implemented as gdal/frmts/raw/doq1dataset.cpp.
Support for read access, including reading of an affine georeferencing transform, capture of the projection string and reading of other auxiliary fields as metadata. This format is the new, labelled DOQ (Digital Ortho Quad) format from the USGS.
This driver was implemented by Derrick J Brashear.
NOTE: Implemented as gdal/frmts/raw/doq2dataset.cpp.
See Also: USGS DOQ Standards
(GDAL >= 1.9.0)
GDAL supports reading DEMs/rasters exported as E00 Grids.
The driver has been tested against datasets such as the one available on ftp://msdis.missouri.edu/pub/dem/24k/county/
NOTE: Implemented as gdal/frmts/e00grid/e00griddataset.cpp.
GDAL supports reading and writing the ESRI .hdr labelling format, often referred to as ESRI BIL format. Eight, sixteen and thirty-two bit integer raster data types are supported, as well as 32 bit floating point. Coordinate systems (from a .prj file) and georeferencing are supported. Unrecognised options in the .hdr file are ignored. To open a dataset select the image file (often with the extension .bil). If present, .clr color table files are read, but not written. If present, an image.rep file will be read to extract the projection system of SpatioCarte Defense 1.0 raster products.
This driver does not always do well differentiating between floating point and integer data. The GDAL extension to the .hdr format to differentiate is to add a field named PIXELTYPE with values of either FLOAT, SIGNEDINT or UNSIGNEDINT. In combination with the NBITS field it is possible to describe all variations of pixel types.
eg.
ncols 1375
nrows 649
cellsize 0.050401
xllcorner -130.128639
yllcorner 20.166799
nodata_value 9999.000000
nbits 32
pixeltype float
byteorder msbfirst
This driver may be sufficient to read GTOPO30 data.
NOTE: Implemented as gdal/frmts/raw/ehdrdataset.cpp.
See Also:
This is a read-only reader for ECRG (Enhanced Compressed Raster Graphic) products. It uses the table of contents file, TOC.xml, and exposes it as a virtual dataset whose coverage is the set of ECRG frames contained in the table of contents.
The driver will report a different subdataset for each subdataset found in the TOC.xml file.
The following is the result of running gdalinfo on a TOC.xml file:
Subdatasets:
  SUBDATASET_1_NAME=ECRG_TOC_ENTRY:ECRG:FalconView:ECRG_Sample/EPF/TOC.xml
  SUBDATASET_1_DESC=ECRG:FalconView
See Also:
NOTE: Implemented as gdal/frmts/nitf/ecrgtocdataset.cpp
GDAL supports the Erdas Imagine Raw format for read access including 1, 2, 4, 8, 16 and 32bit unsigned integers, 16 and 32bit signed integers and 32 and 64bit complex floating point. Georeferencing is supported.
To open a dataset select the file with the header information. The driver finds the image file from the header information. Erdas documents call the header file the "raw" file and it may have the extension .raw while the image file that contains the actual raw data may have the extension .bl.
NOTE: Implemented as gdal/frmts/raw/eirdataset.cpp.
GDAL supports some variations of raw raster files with associated ENVI style .hdr files describing the format. To select an existing ENVI raster file select the binary file containing the data (as opposed to the .hdr file), and GDAL will find the .hdr file by replacing the dataset extension with .hdr.
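As a sketch of that header lookup (plain C++; enviHeaderPath is an illustrative helper, not the driver's actual code, which may probe additional candidate names):

```cpp
#include <cassert>
#include <string>

// Derive a companion ENVI header name by replacing the dataset's
// extension with ".hdr" (appending it when there is no extension).
std::string enviHeaderPath(const std::string &dataset)
{
    const std::string::size_type dot = dataset.find_last_of('.');
    const std::string::size_type slash = dataset.find_last_of("/\\");
    if (dot == std::string::npos ||
        (slash != std::string::npos && dot < slash))
        return dataset + ".hdr";
    return dataset.substr(0, dot) + ".hdr";
}
```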
GDAL should support reading bil, bip and bsq interleaved formats, and most pixel types are supported, including 8bit unsigned, 16 and 32bit signed and unsigned integers, 32bit and 64 bit floating point, and 32bit and 64bit complex floating point. There is limited support for recognising map_info keywords with the coordinate system and georeferencing. In particular, UTM and State Plane should work.
Creation Options:
NOTE: Implemented as gdal/frmts/raw/envidataset.cpp.
GDAL supports the Envisat product format for read access. All sample types are supported. Files with two matching measurement datasets (MDS) are represented as having two bands. Currently all ASAR Level 1 and above products, and some MERIS and AATSR products are supported.
The control points of the GEOLOCATION GRID ADS dataset are read if available, generally giving a good coverage of the dataset. The GCPs are in WGS84.
Virtually all key/value pairs from the MPH and SPH (primary and secondary headers) are copied through as dataset level metadata.
ASAR and MERIS parameters contained in the ADS and GADS records (excluding geolocation ones) can be retrieved as key/value pairs using the "RECORDS" metadata domain.
NOTE: Implemented as gdal/frmts/envisat/envisatdataset.cpp.
See Also: Envisat Data Products at ESA.
FITS is a format used mainly by astronomers, but it is a relatively simple format that supports arbitrary image types and multi-spectral images, and so has found its way into GDAL. FITS support is implemented in terms of the standard CFITSIO library, which you must have on your system in order for FITS support to be enabled. Both reading and writing of FITS files are supported. At the current time, no support for a georeferencing system is implemented, but WCS (World Coordinate System) support is possible in the future.
Non-standard header keywords that are present in the FITS file will be copied to the dataset's metadata when the file is opened, for access via GDAL methods. Similarly, non-standard header keywords that the user defines in the dataset's metadata will be written to the FITS file when the GDAL handle is closed.
Note to those familiar with the CFITSIO library: The automatic rescaling of data values, triggered by the presence of the BSCALE and BZERO header keywords in a FITS file, is disabled in GDAL. Those header keywords are accessible and updatable via dataset metadata, in the same way as any other header keywords, but they do not affect reading/writing of data values from/to the file.
NOTE: Implemented as gdal/frmts/fits/fitsdataset.cpp.
This driver supports reading "Generic Binary" files labelled with a .hdr file, but distinct from the more common ESRI labelled .hdr format (EHdr driver). The origin of this format is not entirely clear. The .hdr files supported by this driver look something like this:
BANDS: 1
ROWS: 6542
COLS: 9340
...
Pixel data types of U8, U16, S16, F32, F64, and U1 (bit) are supported. Georeferencing and coordinate system information should be supported when provided.
NOTE: Implemented as gdal/frmts/raw/genbindataset.cpp.
(GDAL >= 1.9.0)
Supports reading GRASS ASCII grid format (similar to Arc/Info ASCIIGRID command).
By default, the datatype returned for GRASS ASCII grid datasets by GDAL is autodetected and set to Float32 for grids with floating point values, or Int32 otherwise. This is done by analysing the format of the null value and the first 100 KB of data of the grid. You can also explicitly specify the datatype by setting the GRASSASCIIGRID_DATATYPE configuration option (Int32, Float32 and Float64 values are currently supported).
NOTE: Implemented as gdal/frmts/aaigrid/aaigriddataset.cpp.
This is the ASCII-based (human-readable) version of one of the raster formats used by Golden Software products (such as the Surfer series). This format is supported for both reading and writing (including create, delete, and copy). Currently the associated formats for color, metadata, and shapes are not supported.
NOTE: Implemented as gdal/frmts/gsg/gsagdataset.cpp.
This is the binary (non-human-readable) version of one of the raster formats used by Golden Software products (such as the Surfer series). Like the ASCII version, this format is supported for both reading and writing (including create, delete, and copy). Currently the associated formats for color, metadata, and shapes are not supported.
NOTE: Implemented as gdal/frmts/gsg/gsbgdataset.cpp.
This is the binary (non-human-readable) version of one of the raster formats used by Golden Software products (such as the Surfer series). This format differs from the GSBG format (also known as the Surfer 6 binary grid format) in that it is more complicated and flexible. This format is supported for reading only.
NOTE: Implemented as gdal/frmts/gsg/gs7bgdataset.cpp.
This is a raster exchange format propagated by Geosoft, and made a standard in the gravity/magnetics field. GDAL supports reading (but not writing) GXF-3 files, including support for georeferencing information, and projections.
By default, the datatype returned for GXF datasets by GDAL is Float32. From GDAL 1.8.0, you can specify the datatype by setting the GXF_DATATYPE configuration option (Float64 is currently the only supported value).
Details on the supporting code, and format can be found on the GXF-3 page.
NOTE: Implemented as gdal/frmts/gxf/gxfdataset.cpp.
GDAL supports reading and writing IDA images with some limitations. IDA images are the image format of WinDisp 4. The files are always one band only of 8bit data. IDA files often have the extension .img though that is not required.
Projection and georeferencing information is read, though some projections (e.g. Meteosat and Hammer-Aitoff) are not supported. When writing IDA files the projection must have a false easting and false northing of zero. The supported coordinate systems in IDA are Geographic, Lambert Conformal Conic, Lambert Azimuthal Equal Area, Albers Equal-Area Conic and Goodes Homolosine.
IDA files typically contain values scaled to 8bit via a slope and offset. These are returned as the slope and offset values of the bands and they must be used if the data is to be rescaled to original raw values for analysis.
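The rescaling can be sketched as a simple linear transform. The helper below is hypothetical and the slope/offset values are invented for illustration; the real values come from the band's reported scale and offset:

```python
def ida_to_raw(pixel_value: int, slope: float, offset: float) -> float:
    """Recover an approximate original value from an 8-bit IDA pixel.
    Assumes the common linear convention raw = pixel * slope + offset;
    consult the scale/offset actually reported for the band."""
    return pixel_value * slope + offset

# Invented example: an NDVI-style scaling packed into 0-255.
print(ida_to_raw(200, 0.008, -1.0))
```

Analysis on the 8-bit values directly, without applying slope and offset, would operate on scaled quantities rather than the original physical values.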
NOTE: Implemented as gdal/frmts/raw/idadataset.cpp.
See Also: WinDisp
This driver implements reading and writing of ILWIS raster maps and map lists. Select the raster files with the .mpr (for raster map) or .mpl (for map list) extension.
Features:
Limitations:
GDAL includes read support for Japanese DEM files, normally having the extension .mem. These files are a product of the Japanese Geographic Survey Institute.
These files are represented as having one 32bit floating band with elevation data. The georeferencing of the files is returned as well as the coordinate system (always lat/long on the Tokyo datum).
There is no update or creation support for this format.
NOTE: Implemented as gdal/frmts/jdem/jdemdataset.cpp.
See Also: Geographic Survey Institute (GSI) Web Site.
GDAL supports reading and writing Erdas 7.x .LAN and .GIS raster files. Currently 4bit, 8bit and 16bit pixel data types are supported for reading and 8bit and 16bit for writing.
GDAL does read the map extents (geotransform) from LAN/GIS files, and attempts to read the coordinate system information. However, this format of file does not include complete coordinate system information, so for state plane and UTM coordinate systems a LOCAL_CS definition is returned with valid linear units but no other meaningful information.
The .TRL, .PRO and worldfiles are ignored at this time.
NOTE: Implemented as gdal/frmts/raw/landataset.cpp
Development of this driver was financially supported by Kevin Flanders of (PeopleGIS).
GDAL includes read, update, and creation support for Vexcel's MFF raster format. MFF datasets consist of a header file (typically with the extension .hdr) and a set of data files with extensions like .x00, .b00 and so on. To open a dataset select the .hdr file.
Reading lat/long GCPs (TOP_LEFT_CORNER, ...) is supported but there is no support for reading affine georeferencing or projection information.
Unrecognised keywords from the .hdr file are preserved as metadata.
All data types with GDAL equivalents are supported, including 8, 16, 32 and 64 bit data precisions in integer, real and complex data types. In addition, tile organized files (as produced by the Vexcel SAR Processor - APP) are supported for reading.
On creation (with a format code of MFF) a simple, ungeoreferenced raster file is created.
MFF files are not normally portable between systems with different byte orders. However GDAL honours the new BYTE_ORDER keyword which can take a value of LSB (Integer -- little endian), and MSB (Motorola -- big endian). This may be manually added to the .hdr file if required.
NOTE: Implemented as gdal/frmts/raw/mffdataset.cpp.
GDAL has limited support for reading NLAPS Data Format files. This is a format primarily used by the EROS Data Center for distribution of Landsat data. NDF datasets consist of a header file (often with the extension .H1) and one or more associated raw data files (often .I1, .I2, ...). To open a dataset select the header file, often with the extension .H1, .H2 or .HD.
The NDF driver only supports 8bit data. The only supported projection is UTM. NDF version 1 (NDF_VERSION=0.00) and NDF version 2 are both supported.
NOTE: Implemented as gdal/frmts/raw/ndfdataset.cpp.
See Also: NLAPS Data Format Specification.
GDAL has limited support for reading and writing netCDF grid files. NetCDF files that are not recognised as grids (they lack variables called dimension, and z) will be silently ignored by this driver. This driver is primarily intended to provide a mechanism for grid interchange with the GMT package. The netCDF driver should be used for more general netCDF datasets.
The units information in the file will be ignored, but x_range, and y_range information will be read to get georeferenced extents of the raster. All netCDF data types should be supported for reading.
Newly created files (with a type of GMT) will always have units of "meters" for x, y and z but the x_range, y_range and z_range should be correct. Note that netCDF does not have an unsigned byte data type, so 8bit rasters will generally need to be converted to Int16 for export to GMT.
NetCDF support in GDAL is optional, and not compiled in by default.
NOTE: Implemented as gdal/frmts/netcdf/gmtdataset.cpp.
See Also: Unidata NetCDF Page
GDAL includes a partial implementation of the PCI .aux labelled raw raster file for read, write and creation. To open a PCI labelled file, select the raw data file itself. The .aux file (which must have a common base name) will be checked for automatically.
The format type for creating new files is PAux. All PCI data types (8U, 16U, 16S, and 32R) are supported. Currently georeferencing, projections, and other metadata is ignored.
Creation Options:
NOTE: Implemented as gdal/frmts/raw/pauxdataset.cpp.
See Also: PCI's .aux Format Description
GDAL includes support for reading and writing PCRaster raster files. PCRaster is a dynamic modelling system for distributed simulation models. The main applications of PCRaster are found in environmental modelling: geography, hydrology, ecology to name a few. Examples include rainfall-runoff models, vegetation competition models and slope stability models.
The driver reads all types of PCRaster maps: boolean, nominal, ordinal, scalar, directional and ldd. The same cell representation used to store values in the file is used to store the values in memory.
The driver detects whether the source of the GDAL raster is a PCRaster file. When such a raster is written to a file the value scale of the original raster will be used. The driver always writes values using UINT1, INT4 or REAL4 cell representations, depending on the value scale:
Value scale | Cell representation |
---|---|
VS_BOOLEAN | CR_UINT1 |
VS_NOMINAL | CR_INT4 |
VS_ORDINAL | CR_INT4 |
VS_SCALAR | CR_REAL4 |
VS_DIRECTION | CR_REAL4 |
VS_LDD | CR_UINT1 |
For rasters from sources other than a PCRaster raster file, a value scale and cell representation are determined according to the following rules:
Source type | Target value scale | Target cell representation |
---|---|---|
GDT_Byte | VS_BOOLEAN | CR_UINT1 |
GDT_Int32 | VS_NOMINAL | CR_INT4 |
GDT_Float32 | VS_SCALAR | CR_REAL4 |
GDT_Float64 | VS_SCALAR | CR_REAL4 |
The driver can convert values from one supported cell representation to another. It cannot convert to unsupported cell representations. For example, it is not possible to write a PCRaster raster file from values which are used as CR_INT2 (GDT_Int16).
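The two tables above can be read as simple lookup tables. The sketch below is illustrative only; the names are hypothetical and are not part of the driver's API:

```python
# Mirrors the "Value scale | Cell representation" table above.
WRITE_CELL_REPRESENTATION = {
    "VS_BOOLEAN": "CR_UINT1",
    "VS_NOMINAL": "CR_INT4",
    "VS_ORDINAL": "CR_INT4",
    "VS_SCALAR": "CR_REAL4",
    "VS_DIRECTION": "CR_REAL4",
    "VS_LDD": "CR_UINT1",
}

# Mirrors the "Source type | Target value scale | Target cell
# representation" table for non-PCRaster sources.
FROM_GDAL_TYPE = {
    "GDT_Byte": ("VS_BOOLEAN", "CR_UINT1"),
    "GDT_Int32": ("VS_NOMINAL", "CR_INT4"),
    "GDT_Float32": ("VS_SCALAR", "CR_REAL4"),
    "GDT_Float64": ("VS_SCALAR", "CR_REAL4"),
}

def pcraster_target(gdal_type: str):
    """Return (value scale, cell representation) for a non-PCRaster
    source, or None for an unsupported type such as GDT_Int16."""
    return FROM_GDAL_TYPE.get(gdal_type)

print(pcraster_target("GDT_Float32"))  # ('VS_SCALAR', 'CR_REAL4')
print(pcraster_target("GDT_Int16"))   # None
```

The None case corresponds to the CR_INT2 (GDT_Int16) example in the text: there is no supported cell representation to write to.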
Although the de-facto file extension of a PCRaster raster file is .map, the PCRaster software does not require a standardized file extension.
NOTE: Implemented as gdal/frmts/pcraster/pcrasterdataset.cpp .
See also: PCRaster website at Utrecht University and PCRaster Environmental Software company website.
GDAL includes support for reading and creating .png files. Greyscale, pseudo-colored, paletted, RGB and RGBA PNG files are supported, as well as precisions of eight and sixteen bits per sample.
PNG files are linearly compressed, so random reading of large PNG files can be very inefficient (resulting in many restarts of decompression from the start of the file).
Text chunks are translated into metadata, typically with multiple lines per item. World files with the extensions of .pgw, .pngw or .wld will be read. Single transparency values in greyscale files will be recognised as a nodata value in GDAL. Transparent index in paletted images are preserved when the color table is read.
PNG files can be created with a type of PNG, using the CreateCopy() method, requiring a prototype to read from. Writing includes support for the various image types, and will preserve transparency/nodata values. A georeferencing .wld world file is written if the WORLDFILE creation option is set. All pixel types other than 16bit unsigned will be written as eight bit.
Starting with GDAL 1.9.0, XMP metadata can be extracted from the file, and will be stored as XML raw content in the xml:XMP metadata domain.
Creation Options:
NOTE: Implemented as gdal/frmts/png/pngdataset.cpp.
PNG support is implemented based on the libpng reference library. More information is available at http://www.libpng.org/pub/png.
GDAL includes support for reading, and creating .pgm (greyscale), and .ppm (RGB color) files compatible with the Netpbm tools. Only the binary (raw) formats are supported.
Netpbm files can be created with a type of PNM.
Creation Options:
NOTE: Implemented as gdal/frmts/raw/pnmdataset.cpp.
This is a read-only reader for RPF products, like CADRG or CIB, that uses the table of content file - A.TOC - from a RPF exchange, and exposes it as a virtual dataset whose coverage is the set of frames contained in the table of content.
The driver will report a separate subdataset for each entry found in the A.TOC file.
Result of a gdalinfo on an A.TOC file:
Subdatasets:
  SUBDATASET_1_NAME=NITF_TOC_ENTRY:CADRG_GNC_5M_1_1:GNCJNCN/rpf/a.toc
  SUBDATASET_1_DESC=CADRG:GNC:Global Navigation Chart:5M:1:1
  [...]
  SUBDATASET_5_NAME=NITF_TOC_ENTRY:CADRG_GNC_5M_7_5:GNCJNCN/rpf/a.toc
  SUBDATASET_5_DESC=CADRG:GNC:Global Navigation Chart:5M:7:5
  SUBDATASET_6_NAME=NITF_TOC_ENTRY:CADRG_JNC_2M_1_6:GNCJNCN/rpf/a.toc
  SUBDATASET_6_DESC=CADRG:JNC:Jet Navigation Chart:2M:1:6
  [...]
  SUBDATASET_13_NAME=NITF_TOC_ENTRY:CADRG_JNC_2M_8_13:GNCJNCN/rpf/a.toc
  SUBDATASET_13_DESC=CADRG:JNC:Jet Navigation Chart:2M:8:13
In some situations, NITF tiles inside a subdataset don't share the same palette. The RPFTOC driver will do its best to remap palettes to the palette reported by gdalinfo (which is the palette of the first tile of the subdataset). In situations where this doesn't give a good result, you can try setting the RPFTOC_FORCE_RGBA environment variable to TRUE before opening the subdataset. This will cause the driver to expose the subdataset as an RGBA dataset instead of a paletted one.
It is possible to build external overviews for a subdataset. The overview for the first subdataset will be named A.TOC.1.ovr, for the second A.TOC.2.ovr, and so on. Note that you must re-open the subdataset with the same RPFTOC_FORCE_RGBA setting as was used when the overviews were built. Do not use any resampling method other than NEAREST when building overviews on a paletted subdataset (RPFTOC_FORCE_RGBA unset).
A gdalinfo on one of these subdatasets will return the various NITF metadata, as well as the list of NITF tiles in the subdataset.
See Also:
NOTE: Implemented as gdal/frmts/nitf/rpftocdataset.cpp
This is a read-only reader for CEOS SAR image files. To use, select the main imagery file.
This driver works with most Radarsat and ERS data products, including single look complex products; however, it is unlikely to work for non-Radar CEOS products. The simpler CEOS driver is often appropriate for these.
This driver will attempt to read 15 lat/long GCPs by sampling the per-scanline CEOS superstructure information. It also captures various pieces of metadata from various header files, including:
CEOS_LOGICAL_VOLUME_ID=EERS-1-SAR-MLD
CEOS_PROCESSING_FACILITY=APP
CEOS_PROCESSING_AGENCY=CCRS
CEOS_PROCESSING_COUNTRY=CANADA
CEOS_SOFTWARE_ID=APP 1.62
CEOS_ACQUISITION_TIME=19911029162818919
CEOS_SENSOR_CLOCK_ANGLE= 90.000
CEOS_ELLIPSOID=IUGG_75
CEOS_SEMI_MAJOR= 6378.1400000
CEOS_SEMI_MINOR= 6356.7550000
The SAR_CEOS driver also includes some support for SIR-C and PALSAR polarimetric data. The SIR-C format contains an image in compressed scattering matrix form, described here. GDAL decompresses the data as it is read in. The PALSAR format contains bands that correspond almost exactly to elements of the 3x3 Hermitian covariance matrix; see the ERSDAC-VX-CEOS-004A.pdf document for a complete description (pixel storage is described on page 193). GDAL converts these to complex floating point covariance matrix bands as they are read in. The convention used to represent the covariance matrix in terms of the scattering matrix elements HH, HV (=VH), and VV is indicated below. Note that the non-diagonal elements of the matrix are complex values, while the diagonal values are real (though represented as complex bands).
The identities of the bands are also reflected in the metadata.
NOTE: Implemented as gdal/frmts/ceos2/sar_ceosdataset.cpp.
(starting with GDAL 1.7.0)
The driver supports both reading and writing (including create, delete, and copy) SAGA GIS binary grids. SAGA binary grid datasets are made of an ASCII header (.SGRD) and a binary data (.SDAT) file with a common basename. The .SDAT file should be selected to access the dataset.
The driver supports reading the following SAGA datatypes (in brackets the corresponding GDAL types): BIT (GDT_Byte), BYTE_UNSIGNED (GDT_Byte), BYTE (GDT_Byte), SHORTINT_UNSIGNED (GDT_UInt16), SHORTINT (GDT_Int16), INTEGER_UNSIGNED (GDT_UInt32), INTEGER (GDT_Int32), FLOAT (GDT_Float32) and DOUBLE (GDT_Float64).
The driver supports writing the following SAGA datatypes: BYTE_UNSIGNED (GDT_Byte), SHORTINT_UNSIGNED (GDT_UInt16), SHORTINT (GDT_Int16), INTEGER_UNSIGNED (GDT_UInt32), INTEGER (GDT_Int32), FLOAT (GDT_Float32) and DOUBLE (GDT_Float64).
Currently the driver does not support zFactors other than 1, nor reading SAGA grids written TOPTOBOTTOM.
NOTE: Implemented as gdal/frmts/saga/sagadataset.cpp.
GDAL includes support for reading USGS SDTS formatted DEMs. USGS DEMs are always returned with a data type of signed sixteen bit integer, or 32bit float. Projection and georeferencing information is also returned.
SDTS datasets consist of a number of files. Each DEM should have one file with a name like XXXCATD.DDF. This should be selected to open the dataset.
The elevation units of DEMs may be feet or meters. The GetUnitType() method on a band will report whether the units are feet ("ft") or meters ("m").
NOTE: Implemented as gdal/frmts/sdts/sdtsdataset.cpp.
The SGI driver currently supports the reading and writing of SGI Image files.
The driver currently supports 1, 2, 3, and 4 band images with 8 bits per channel. Both uncompressed and run-length encoded (RLE) images are supported for reading, but created files are always RLE compressed.
The GDAL SGI Driver was based on Paul Bourke's SGI image read code.
See Also:
NOTE: Implemented as gdal/frmts/sgi/sgidataset.cpp.
(GDAL >= 1.9.0)
This is a convenience driver to read Snow Data Assimilation System (SNODAS) data. These files contain Int16 raw binary data. The file to provide to GDAL is the .Hdr file.
Snow Data Assimilation System (SNODAS) Data Products at NSIDC
NOTE: Implemented as gdal/frmts/raw/snodasdataset.cpp.
(starting with GDAL 1.7.0)
The ASRP and USRP raster products (as defined by DGIWG) are variations on a common standard product format and are supported for reading by GDAL. ASRP and USRP datasets are made of several files - typically a .GEN, .IMG, .SOU and .QAL file with a common basename. The .IMG file should be selected to access the dataset.
ASRP (in a geographic coordinate system) and USRP (in a UTM/UPS coordinate system) products are single band images with a palette and georeferencing.
NOTE: Implemented as gdal/frmts/adrg/srpdataset.cpp.
The SRTM HGT driver currently supports the reading of SRTM-3 and SRTM-1 V2 (HGT) files.
The driver does support creating new files, but the input data must be formatted exactly as an SRTM-3 or SRTM-1 cell. That is, the size and bounds must be appropriate for a cell.
See Also:
NOTE: Implemented as gdal/frmts/srtmhgt/srtmhgtdataset.cpp.
A world file is a plain ASCII text file consisting of six values separated by newlines. The format is:

pixel X size
rotation about the Y axis (usually 0.0)
rotation about the X axis (usually 0.0)
negative pixel Y size
X coordinate of upper left pixel center
Y coordinate of upper left pixel center
For example:
60.0000000000
0.0000000000
0.0000000000
-60.0000000000
440750.0000000000
3751290.0000000000
You can construct that file simply by using your favorite text editor.
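Reading a world file and turning it into a GDAL-style geotransform can be sketched as follows. The helper name is hypothetical; the sample values match the example above. Note the half-pixel shift: a world file gives the center of the upper left pixel, while a geotransform gives its outer corner.

```python
def read_world_file(text: str):
    """Parse the six world file values (in file order: x scale, y skew,
    x skew, y scale, x position, y position) into a GDAL geotransform
    (origin x, pixel width, row rotation, origin y, column rotation,
    pixel height), shifting from pixel center to pixel corner."""
    a, d, b, e, cx, cy = (float(v) for v in text.split())
    return (cx - a / 2.0 - b / 2.0, a, b, cy - d / 2.0 - e / 2.0, d, e)

wld = "60.0 0.0 0.0 -60.0 440750.0 3751290.0"
gt = read_world_file(wld)
print(gt)  # (440720.0, 60.0, 0.0, 3751320.0, 0.0, -60.0)
```

The resulting origin (440720, 3751320) matches the Origin line in the gdalinfo example earlier in this document, whose pixel centers are half a pixel (30 m) inside the corner.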
A world file usually has the suffix .wld, but it may have .tfw, .tifw, .jgw or other suffixes depending on the image file it accompanies.
GDAL includes support for reading and writing XPM (X11 Pixmap Format) image files. These are colormapped one band images primarily used for simple graphics purposes in X11 applications. It has been incorporated in GDAL primarily to ease translation of GDAL images into a form useable with the GTK toolkit.
The XPM driver does not support georeferencing (not available from XPM files) nor does it support XPM files with more than one character per pixel. New XPM files must be colormapped or greyscale, and colortables will be reduced to about 70 colors automatically.
NOTE: Implemented as gdal/frmts/xpm/xpmdataset.cpp.
This read-only GDAL driver is designed to provide access to processed data from Sandia National Laboratories' various experimental sensors. The format is essentially an arbitrary length header containing instrument configuration and performance parameters along with a binary matrix of 16- or 32-bit complex or byte real data.
The GFF format was implemented based on the Matlab code provided by Sandia to read the data. The driver supports all types of data (16-bit or 32-bit complex, real bytes) theoretically, however due to a lack of data only 32-bit complex data has been tested.
Sandia provides some sample data at http://sandia.gov/RADAR/sar-data.html.
The extension for GFF formats is .gff.
NOTE: Implemented as gdal/frmts/gff/gff_dataset.cpp.
(GDAL >= 1.9.0)
Supported for read access and creation. This format is an ASCII interchange format for gridded data in an ASCII line format for transport and storage. It is commonly used in applications in the Oil and Gas Exploration field.
By default, files are interpreted and written according to the PIXEL_IS_AREA convention. If you set the ZMAP_PIXEL_IS_POINT configuration option to TRUE, the PIXEL_IS_POINT convention will be followed to interpret/write the file (the georeferenced values in the header of the file will then be considered as the coordinates of the center of the pixels). Note that in that case, GDAL will still report the extent with its usual PIXEL_IS_AREA convention (the coordinates of the top-left corner as reported by GDAL will be a half-pixel up and to the left of the values that appear in the file).
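The half-pixel shift between the two conventions can be illustrated with a small sketch (hypothetical helper, invented sample values):

```python
def point_to_area_origin(center_x, center_y, pixel_size_x, pixel_size_y):
    """Convert a PIXEL_IS_POINT origin (coordinate of the center of the
    top-left pixel, as stored in the file header) to the PIXEL_IS_AREA
    top-left corner that GDAL reports. Assumes north-up data with a
    positive pixel_size_y."""
    return (center_x - pixel_size_x / 2.0, center_y + pixel_size_y / 2.0)

# A 10x10 unit pixel whose center is at (100, 200):
print(point_to_area_origin(100.0, 200.0, 10.0, 10.0))  # (95.0, 205.0)
```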
Informal specification given in this GDAL-dev mailing list thread
NOTE: Implemented as gdal/frmts/zmap/zmapdataset.cpp.
Full list of GDAL Raster Formats
Accessing a WCS server is accomplished by creating a local service description xml file looking something like the following, with the coverage server url, and the name of the coverage to access. It is important that there be no spaces or other content before the <WCS_GDAL> element.
<WCS_GDAL>
  <ServiceURL>http://laits.gmu.edu/cgi-bin/NWGISS/NWGISS?</ServiceURL>
  <CoverageName>AUTUMN.hdf</CoverageName>
</WCS_GDAL>

When first opened, GDAL will fetch the coverage description, and a small test image to establish details about the raster. This information will be cached in the service description file to make future opens faster; no server access should be required on later opens until imagery is read.
The WCS driver should support WCS 1.0.0 and 1.1.0 servers, but WCS 0.7 servers are not supported. Any return format that is a single file and in a format supported by GDAL should work. The driver will prefer a format with "tiff" in the name, and otherwise fall back to the first offered format. Coordinate systems are read from the DescribeCoverage result, and are expected to be in the form EPSG:n in the <supportedCRSs> element.
The service description file has the following additional elements as immediate children of the WCS_GDAL element that may be optionally set.
Note that time based subdatasets are not supported when the service description is the filename. Currently time support is not available for versions other than WCS 1.0.0.
See Also:
Starting with GDAL 1.9.0, GDAL can read and write WebP images through the WebP library.
WebP is a new image format that provides lossy compression for photographic images. A WebP file consists of VP8 image data, and a container based on RIFF.
The driver relies on the Open Source WebP library (BSD licensed). The WebP library (at least in version 0.1) only offers compression and decompression of whole images, so RAM might be a limitation when dealing with big images.
The WEBP driver only supports 3-band (RGB) images.
The WEBP driver can be used as the internal format used by the Rasterlite driver.
QUALITY=n By default the quality flag is set to 75, but this option can be used to select other values. Values must be in the range 1-100. Low values result in higher compression ratios, but poorer image quality.
See Also:
Accessing several different types of web image services is possible using the WMS format in GDAL. Services are accessed by creating a local service description XML file -- there are examples below for each of the supported image services. It is important that there be no spaces or other content before the <GDAL_WMS> element.
<GDAL_WMS> | |
<Service name=" WMS"> | Define what mini-driver to use, currently supported are: WMS, WorldWind, TileService, TMS, TiledWMS or VirtualEarth. (required) |
<Version> 1.1.1</Version> | WMS version. (optional, defaults to 1.1.1) |
<ServerUrl> http://onearth.jpl.nasa.gov/wms.cgi?</ServerUrl> | WMS server URL. (required) |
<SRS> EPSG:4326</SRS> | Image projection (optional, defaults to EPSG:4326, WMS version 1.1.1 or below only) |
<CRS> CRS:83</CRS> | Image projection (optional, defaults to EPSG:4326, WMS version 1.3.0 or above only) |
<ImageFormat> image/jpeg</ImageFormat> | Format in which to request data. Paletted formats like image/gif will be converted to RGB. (optional, defaults to image/jpeg) |
<Transparent> FALSE</Transparent> | Set to TRUE to include "transparent=TRUE" in the WMS GetMap request. (optional, defaults to FALSE) The request format and BandsCount need to support alpha. |
<Layers> modis,global_mosaic</Layers> | Comma separated list of layers. (required, except for TiledWMS) |
<TiledGroupName> Clementine</TiledGroupName> | Name of the tiled group to use. (required for TiledWMS) |
<Styles></Styles> | Comma separated list of styles. (optional) |
<BBoxOrder>xyXY</BBoxOrder> | Reorder bbox coordinates arbitrarily. May be required for version 1.3 servers. (optional) x - low X coordinate, y - low Y coordinate, X - high X coordinate, Y - high Y coordinate |
</Service> | |
<DataWindow> | Define size and extents of the data. (required, except for TiledWMS and VirtualEarth) |
<UpperLeftX> -180.0</UpperLeftX> | X (longitude) coordinate of upper-left corner. (optional, defaults to -180.0, except for VirtualEarth) |
<UpperLeftY> 90.0</UpperLeftY> | Y (latitude) coordinate of upper-left corner. (optional, defaults to 90.0, except for VirtualEarth) |
<LowerRightX> 180.0</LowerRightX> | X (longitude) coordinate of lower-right corner. (optional, defaults to 180.0, except for VirtualEarth) |
<LowerRightY> -90.0</LowerRightY> | Y (latitude) coordinate of lower-right corner. (optional, defaults to -90.0, except for VirtualEarth) |
<SizeX> 2666666</SizeX> | Image size in pixels. |
<SizeY> 1333333</SizeY> | Image size in pixels. |
<TileX> 0</TileX> | Added to tile X value at highest resolution. (ignored for WMS, tiled image sources only, optional, defaults to 0) |
<TileY> 0</TileY> | Added to tile Y value at highest resolution. (ignored for WMS, tiled image sources only, optional, defaults to 0) |
<TileLevel> 0</TileLevel> | Tile level at highest resolution. (tiled image sources only, optional, defaults to 0) |
<TileCountX> 0</TileCountX> | Can be used to define image size, SizeX = TileCountX * BlockSizeX * 2^TileLevel. (tiled image sources only, optional, defaults to 0) |
<TileCountY> 0</TileCountY> | Can be used to define image size, SizeY = TileCountY * BlockSizeY * 2^TileLevel. (tiled image sources only, optional, defaults to 0) |
<YOrigin> top</YOrigin> | Can be used to define the position of the Y origin with respect to the tile grid. Possible values are 'top', 'bottom', and 'default', where the default behavior is mini-driver-specific. (TMS mini-driver only, optional, defaults to 'bottom' for TMS) |
</DataWindow> | |
<Projection> EPSG:4326</Projection> | Image projection (optional, defaults to value reported by mini-driver or EPSG:4326) |
<BandsCount> 3</BandsCount> | Number of bands/channels, 1 for grayscale data, 3 for RGB. (optional, defaults to 3) |
<BlockSizeX> 1024</BlockSizeX> | Block size in pixels. (optional, defaults to 1024, except for VirtualEarth) |
<BlockSizeY> 1024</BlockSizeY> | Block size in pixels. (optional, defaults to 1024, except for VirtualEarth) |
<OverviewCount> 10</OverviewCount> | Count of reduced resolution layers each having 2 times lower resolution. (optional, default is calculated at runtime) |
<Cache> | Enable local disk cache. Allows for offline operation. (optional, defaults to no cache) |
<Path> ./gdalwmscache</Path> | Location where to store cache files. It is safe to use same cache path for different data sources. (optional, defaults to ./gdalwmscache) |
<Depth> 2</Depth> | Number of directory layers. 2 will result in files being written as cache_path/A/B/ABCDEF... (optional, defaults to 2) |
<Extension> .jpg</Extension> | Append to cache files. (optional, defaults to none) |
</Cache> | |
<MaxConnections> 2</MaxConnections> | Maximum number of simultaneous connections. (optional, defaults to 2) |
<Timeout> 300</Timeout> | Connection timeout in seconds. (optional, defaults to 300) |
<OfflineMode> true</OfflineMode> | Do not download any new images, use only what is in cache. Useful only with cache enabled. (optional, defaults to false) |
<AdviseRead> true</AdviseRead> | Enable AdviseRead API call - download images into cache. (optional, defaults to false) |
<VerifyAdviseRead> true</VerifyAdviseRead> | Open each downloaded image and do some basic checks before writing into cache. Disabling can save some CPU cycles if server is trusted to always return correct images. (optional, defaults to true) |
<ClampRequests> false</ClampRequests> | If true, requests that would otherwise fall partially outside the defined data window are clipped, resulting in requests smaller than the block size. (optional, defaults to true) |
<UserAgent> GDAL WMS driver (http://www.gdal.org/frmt_wms.html)</UserAgent> | HTTP User-agent string. Some servers might require a well-known user-agent such as "Mozilla/5.0" (optional, defaults to "GDAL WMS driver (http://www.gdal.org/frmt_wms.html)"). Added in GDAL 1.8.0 |
<Referer> http://example.foo/</Referer> | HTTP Referer string. Some servers might require it (optional). Added in GDAL 1.9.0 |
</GDAL_WMS> | |
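The TileCountX/TileCountY formulas above can be checked with a quick sketch (hypothetical helper, invented values):

```python
def tiled_image_size(tile_count: int, block_size: int, tile_level: int) -> int:
    """Size = TileCount * BlockSize * 2^TileLevel, as in the
    TileCountX/TileCountY descriptions above."""
    return tile_count * block_size * 2 ** tile_level

# e.g. 2 tiles across at the coarsest level, 256-pixel blocks, TileLevel 5:
print(tiled_image_size(2, 256, 5))  # 16384
```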
The GDAL WMS driver has support for several internal 'minidrivers', which allow access to different web mapping services. Each of these services may support a different set of options in the Service block.
Communications with an OGC WMS server. Has support for both tiled and untiled requests.
Service to support talking to a WorldWind TileService. Access is always tile based.
Access to web-based WorldWind tile services. Access is always tile based.
The TMS Minidriver is designed primarily to support the users of the TMS Specification. This service supports only access by tiles.
Because TMS is similar to many other 'x/y/z' flavored services on the web, this service can also be used to access these services. To use it in this fashion, you can use replacement variables, of the format ${x}, ${y}, etc.
Supported variables (name is case sensitive) are :
A typical ServerURL might look like:
http://labs.metacarta.com/wms-c/Basic.py/${version}/${layer}/${z}/${x}/${y}.${format}
In order to better suit TMS users, any URL that does not contain "${" will automatically have the string above (after "Basic.py/") appended to it.
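The variable replacement the driver performs can be reproduced with Python's string.Template as a sketch; the layer name and tile indices below are invented for illustration:

```python
from string import Template

# ServerURL template using the ${...} replacement variables described above.
url = Template(
    "http://labs.metacarta.com/wms-c/Basic.py/"
    "${version}/${layer}/${z}/${x}/${y}.${format}"
)

print(url.substitute(version="1.0.0", layer="basic", z=2, x=1, y=3,
                     format="png"))
# http://labs.metacarta.com/wms-c/Basic.py/1.0.0/basic/2/1/3.png
```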
The TMS Service has 3 XML configuration elements that are different from other services: Format which defaults to jpg, Layer which has no default, and Version which defaults to 1.0.0.
Additionally, the TMS service respects one additional parameter, at the DataWindow level, which is the YOrigin element. This element should be one of bottom (the default in TMS) or top, which matches OpenStreetMap and many other popular tile services.
Two examples of usage of the TMS service are included in the examples below.
The OnEarth Tiled WMS minidriver supports the Tiled WMS specification implemented for the JPL OnEarth driver per the specification at http://onearth.jpl.nasa.gov/tiled.html.
A typical OnEarth Tiled WMS configuration file might look like:
<GDAL_WMS>
  <Service name="TiledWMS">
    <ServerUrl>http://onmoon.jpl.nasa.gov/wms.cgi?</ServerUrl>
    <TiledGroupName>Clementine</TiledGroupName>
  </Service>
</GDAL_WMS>

Most of the other information is automatically fetched from the remote server using the GetTileService method at open time.
Access to web-based Virtual Earth tile services. Access is always tile based.
The ${quadkey} variable must be found in the ServerUrl element.
The DataWindow element might be omitted. The default values are :
onearth_global_mosaic.xml - Landsat mosaic from an OnEarth WMS server
gdal_translate -of JPEG -outsize 500 250 onearth_global_mosaic.xml onearth_global_mosaic.jpg
gdal_translate -of JPEG -projwin -10 55 30 35 -outsize 500 250 onearth_global_mosaic.xml onearth_global_mosaic2.jpg
metacarta_wmsc.xml - It is possible to configure a WMS Service conforming to a WMS-C cache by specifying a number of overviews and specifying the 'block size' as the tile size of the cache. The following example is a sample set up for a 19-level "Global Profile" WMS-C cache.
gdal_translate -of PNG -outsize 500 250 metacarta_wmsc.xml metacarta_wmsc.png
tileservice_bmng.xml - TileService, Blue Marble NG (January)
gdal_translate -of JPEG -outsize 500 250 tileservice_bmng.xml tileservice_bmng.jpg
tileservice_nysdop2004.xml - TileService, NYSDOP 2004
gdal_translate -of JPEG -projwin -73.687030 41.262680 -73.686359 41.262345 -outsize 500 250 tileservice_nysdop2004.xml tileservice_nysdop2004.jpg
OpenStreetMap TMS Service Example: Connect to the OpenStreetMap tile service. Note that this file takes advantage of the tile cache; more information about configuring the tile cache settings is available above.
gdal_translate -of PNG -outsize 512 512 frmt_wms_openstreetmap_tms.xml openstreetmap.png
MetaCarta TMS Layer Example, accessing the default MetaCarta TMS layer.
gdal_translate -of PNG -outsize 512 256 frmt_wms_metacarta_tms.xml metacarta.png
BlueMarble Amazon S3 Example accessed with the TMS minidriver.
Google Maps accessed with the TMS minidriver.
ArcGIS MapServer Tiles accessed with the TMS minidriver.
Swiss Geoportal maps accessed with the TMS minidriver (needs GDAL >= 1.9.0)
OnEarth Tiled WMS Clementine, daily, and srtm examples.
VirtualEarth Aerial Layer accessed with the VirtualEarth minidriver.
a local service description XML file:
gdalinfo description_file.xml
the content of a description XML file provided as the filename:
gdalinfo "<GDAL_WMS><Service name=\"TiledWMS\"><ServerUrl> http://onearth.jpl.nasa.gov/wms.cgi?</ServerUrl><TiledGroupName>Global SRTM Elevation</TiledGroupName></Service></GDAL_WMS>"
(GDAL >= 1.9.0) the base URL of a WMS service, prefixed with WMS: :
gdalinfo "WMS:http://wms.geobase.ca/wms-bin/cubeserv.cgi"
A list of subdatasets will be returned, resulting from the parsing of the GetCapabilities request on that server.
(GDAL >= 1.9.0) a pseudo GetMap request, such as the subdataset name returned by the previous syntax :
gdalinfo "WMS:http://wms.geobase.ca/wms-bin/cubeserv.cgi?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&LAYERS=DNEC_250K:ELEVATION/ELEVATION&SRS=EPSG:42304&BBOX=-3000000,-1500000,6000000,4500000"
(GDAL >= 1.9.0) the base URL of a Tiled WMS service, prefixed with WMS: and with request=GetTileService as GET argument:
gdalinfo "WMS:http://onearth.jpl.nasa.gov/wms.cgi?request=GetTileService"
A list of subdatasets will be returned, resulting from the parsing of the GetTileService request on that server.
(GDAL >= 1.9.0) the URL of a REST definition for a ArcGIS MapServer:
gdalinfo "http://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer?f=json&pretty=true"
GDAL supports reading and writing ASCII gridded XYZ raster datasets (i.e. ungridded XYZ, LIDAR XYZ etc. must be opened by other means; see the documentation of the gdal_grid utility).
Those datasets are ASCII files with (at least) 3 columns, each line containing the X and Y coordinates of the center of the cell and the value of the cell.
The spacing between each cell must be constant and missing values are not supported. Cells with the same Y coordinate must be placed on consecutive lines. For a given Y coordinate value, the lines in the dataset must be organized by increasing X values. The value of the Y coordinate can increase or decrease, however. The supported column separators are space, comma, semicolon and tabulation.
The driver tries to autodetect a header line and will look for 'x', 'lon' or 'east' names to detect the index of the X column, 'y', 'lat' or 'north' for the Y column and 'z', 'alt' or 'height' for the Z column. If no header is present, or if one of the columns could not be identified in the header, the X, Y and Z columns (in that order) are assumed to be the first 3 columns of each line.
The opening of a big dataset can be slow as the driver must scan the whole file to determine the dataset size and spatial resolution. The driver will autodetect the data type among Byte, Int16, Int32 or Float32.
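The column-detection rules above can be sketched as follows. This is a minimal illustrative reader, not the GDAL driver itself; the function name and structure are assumptions for illustration.

```python
# Recognized header names for each coordinate column, as described above.
X_NAMES = {"x", "lon", "east"}
Y_NAMES = {"y", "lat", "north"}
Z_NAMES = {"z", "alt", "height"}

def read_xyz(text):
    """Parse an ASCII XYZ grid into a list of (x, y, z) tuples."""
    lines = [l for l in text.splitlines() if l.strip()]
    # Try each supported separator: space, comma, semicolon, tabulation.
    for sep in (None, ",", ";", "\t"):
        tokens = lines[0].split(sep)
        if len(tokens) >= 3:
            break
    cols = [t.strip().lower() for t in tokens]
    if any(c in X_NAMES for c in cols):
        # Header line detected: locate each column by name.
        xi = next(i for i, c in enumerate(cols) if c in X_NAMES)
        yi = next(i for i, c in enumerate(cols) if c in Y_NAMES)
        zi = next(i for i, c in enumerate(cols) if c in Z_NAMES)
        data_lines = lines[1:]
    else:
        # No header: X, Y, Z are assumed to be the first 3 columns.
        xi, yi, zi = 0, 1, 2
        data_lines = lines
    points = []
    for line in data_lines:
        fields = line.split(sep)
        points.append((float(fields[xi]), float(fields[yi]), float(fields[zi])))
    return points

pts = read_xyz("lon,lat,z\n0,0,1\n1,0,2\n0,1,3\n1,1,4")
```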
Usage:
ogrinfo [--help-general] [-ro] [-q] [-where restricted_where] [-spat xmin ymin xmax ymax] [-fid fid] [-sql statement] [-dialect dialect] [-al] [-so] [-fields={YES/NO}] [-geom={YES/NO/SUMMARY}] [--formats] datasource_name [layer [layer ...]]
The ogrinfo program lists various information about an OGR supported data source to stdout (the terminal).
If no layer names are passed then ogrinfo will report a list of available layers (and their layerwide geometry type). If layer name(s) are given then their extents, coordinate system, feature count, geometry type, schema and all features matching query parameters will be reported to the terminal. If no query parameters are provided, all features are reported.
Geometries are reported in OGC WKT format.
Example reporting all layers in an NTF file:
% ogrinfo wrk/SHETLAND_ISLANDS.NTF INFO: Open of `wrk/SHETLAND_ISLANDS.NTF' using driver `UK .NTF' successful. 1: BL2000_LINK (Line String) 2: BL2000_POLY (None) 3: BL2000_COLLECTIONS (None) 4: FEATURE_CLASSES (None)
Example of using an attribute query to restrict the output to the features in a layer:
% ogrinfo -ro -where 'GLOBAL_LINK_ID=185878' wrk/SHETLAND_ISLANDS.NTF BL2000_LINK INFO: Open of `wrk/SHETLAND_ISLANDS.NTF' using driver `UK .NTF' successful. Layer name: BL2000_LINK Geometry: Line String Feature Count: 1 Extent: (419794.100000, 1069031.000000) - (419927.900000, 1069153.500000) Layer SRS WKT: PROJCS["OSGB 1936 / British National Grid", GEOGCS["OSGB 1936", DATUM["OSGB_1936", SPHEROID["Airy 1830",6377563.396,299.3249646]], PRIMEM["Greenwich",0], UNIT["degree",0.0174532925199433]], PROJECTION["Transverse_Mercator"], PARAMETER["latitude_of_origin",49], PARAMETER["central_meridian",-2], PARAMETER["scale_factor",0.999601272], PARAMETER["false_easting",400000], PARAMETER["false_northing",-100000], UNIT["metre",1]] LINE_ID: Integer (6.0) GEOM_ID: Integer (6.0) FEAT_CODE: String (4.0) GLOBAL_LINK_ID: Integer (10.0) TILE_REF: String (10.0) OGRFeature(BL2000_LINK):2 LINE_ID (Integer) = 2 GEOM_ID (Integer) = 2 FEAT_CODE (String) = (null) GLOBAL_LINK_ID (Integer) = 185878 TILE_REF (String) = SHETLAND I LINESTRING (419832.100 1069046.300,419820.100 1069043.800,419808.300 1069048.800,419805.100 1069046.000,419805.000 1069040.600,419809.400 1069037.400,419827.400 1069035.600,419842 1069031,419859.000 1069032.800,419879.500 1069049.500,419886.700 1069061.400,419890.100 1069070.500,419890.900 1069081.800,419896.500 1069086.800,419898.400 1069092.900,419896.700 1069094.800,419892.500 1069094.300,419878.100 1069085.600,419875.400 1069087.300,419875.100 1069091.100,419872.200 1069094.600,419890.400 1069106.400,419907.600 1069112.800,419924.600 1069133.800,419927.900 1069146.300,419927.600 1069152.400,419922.600 1069153.500,419917.100 1069153.500,419911.500 1069153.000,419908.700 1069152.500,419903.400 1069150.800,419898.800 1069149.400,419894.800 1069149.300,419890.700 1069149.400,419890.600 1069149.400,419880.800 1069149.800,419876.900 1069148.900,419873.100 1069147.500,419870.200 1069146.400,419862.100 1069143.000,419860 1069142,419854.900 1069138.600,419850 
1069135,419848.800 1069134.100,419843 1069130,419836.200 1069127.600,419824.600 1069123.800,419820.200 1069126.900,419815.500 1069126.900,419808.200 1069116.500,419798.700 1069117.600,419794.100 1069115.100,419796.300 1069109.100,419801.800 1069106.800,419805.000 1069107.300)
Usage:
Usage: ogr2ogr [--help-general] [-skipfailures] [-append] [-update] [-select field_list] [-where restricted_where] [-progress] [-sql <sql statement>] [-dialect dialect] [-preserve_fid] [-fid FID] [-spat xmin ymin xmax ymax] [-a_srs srs_def] [-t_srs srs_def] [-s_srs srs_def] [-f format_name] [-overwrite] [[-dsco NAME=VALUE] ...] dst_datasource_name src_datasource_name [-lco NAME=VALUE] [-nln name] [-nlt type] [layer [layer ...]] Advanced options : [-gt n] [-clipsrc [xmin ymin xmax ymax]|WKT|datasource|spat_extent] [-clipsrcsql sql_statement] [-clipsrclayer layer] [-clipsrcwhere expression] [-clipdst [xmin ymin xmax ymax]|WKT|datasource] [-clipdstsql sql_statement] [-clipdstlayer layer] [-clipdstwhere expression] [-wrapdateline] [-segmentize max_dist] [-fieldTypeToString All|(type1[,type2]*)] [-splitlistfields] [-maxsubfields val] [-explodecollections] [-zfield field_name]
This program can be used to convert simple features data between file formats, performing various operations during the process such as spatial or attribute selection, reducing the set of attributes, setting the output coordinate system, or even reprojecting the features during translation.
-f "ESRI Shapefile" -f "TIGER" -f "MapInfo File" -f "GML" -f "PostgreSQL"
The srs_def can be a full WKT definition (hard to escape properly), a well-known definition (e.g. EPSG:4326) or a file containing a WKT definition.
Advanced options :
Example appending to an existing layer (both flags need to be used):
% ogr2ogr -update -append -f PostgreSQL PG:dbname=warmerda abc.tab
Example reprojecting from ETRS_1989_LAEA_52N_10E to EPSG:4326 and clipping to a bounding box:
% ogr2ogr -wrapdateline -t_srs EPSG:4326 -clipdst -5 40 15 55 france_4326.shp europe_laea.shp
More examples are given in the individual format pages.
Usage:
ogrtindex [-lnum n]... [-lname name]... [-f output_format] [-write_absolute_path] [-skip_different_projection] output_dataset src_dataset...
The ogrtindex program can be used to create a tileindex - a file containing a list of the identities of a set of other files along with their spatial extents. This is primarily intended to be used with MapServer for tiled access to layers using the OGR connection type.
If no -lnum or -lname arguments are given it is assumed that all layers in source datasets should be added to the tile index as independent records.
If the tile index already exists it will be appended to, otherwise it will be created.
It is a flaw of the current ogrtindex program that no attempt is made to copy the coordinate system definition from the source datasets to the tile index (as is expected by MapServer when PROJECTION AUTO is in use).
This example would create a shapefile (tindex.shp) containing a tile index of the BL2000_LINK layers in all the NTF files in the wrk directory:
% ogrtindex tindex.shp wrk/*.NTF
Format Name | Code | Creation | Georeferencing | Compiled by default |
---|---|---|---|---|
Aeronav FAA files | AeronavFAA | No | Yes | Yes |
ESRI ArcObjects | ArcObjects | No | Yes | No, needs ESRI ArcObjects |
Arc/Info Binary Coverage | AVCBin | No | Yes | Yes |
Arc/Info .E00 (ASCII) Coverage | AVCE00 | No | Yes | Yes |
Atlas BNA | BNA | Yes | No | Yes |
AutoCAD DXF | DXF | Yes | No | Yes |
Comma Separated Value (.csv) | CSV | Yes | No | Yes |
CouchDB / GeoCouch | CouchDB | Yes | Yes | No, needs libcurl |
DODS/OPeNDAP | DODS | No | Yes | No, needs libdap |
EDIGEO | EDIGEO | No | Yes | Yes |
ESRI FileGDB | FileGDB | Yes | Yes | No, needs FileGDB API library |
ESRI Personal GeoDatabase | PGeo | No | Yes | No, needs ODBC library |
ESRI ArcSDE | SDE | No | Yes | No, needs ESRI SDE |
ESRI Shapefile | ESRI Shapefile | Yes | Yes | Yes |
FMEObjects Gateway | FMEObjects Gateway | No | Yes | No, needs FME |
GeoJSON | GeoJSON | Yes | Yes | Yes |
Geoconcept Export | Geoconcept | Yes | Yes | Yes |
Geomedia .mdb | Geomedia | No | No | No, needs ODBC library |
GeoRSS | GeoRSS | Yes | Yes | Yes (read support needs libexpat) |
Google Fusion Tables | GFT | Yes | Yes | No, needs libcurl |
GML | GML | Yes | Yes | Yes (read support needs Xerces or libexpat) |
GMT | GMT | Yes | Yes | Yes |
GPSBabel | GPSBabel | Yes | Yes | Yes (needs GPSBabel and GPX driver) |
GPX | GPX | Yes | Yes | Yes (read support needs libexpat) |
GRASS | GRASS | No | Yes | No, needs libgrass |
GPSTrackMaker (.gtm, .gtz) | GPSTrackMaker | Yes | Yes | Yes |
Hydrographic Transfer Format | HTF | No | Yes | Yes |
Informix DataBlade | IDB | Yes | Yes | No, needs Informix DataBlade |
INTERLIS | "Interlis 1" and "Interlis 2" | Yes | Yes | No, needs Xerces (INTERLIS model reading needs ili2c.jar) |
INGRES | INGRES | Yes | No | No, needs INGRES |
KML | KML | Yes | Yes | Yes (read support needs libexpat) |
LIBKML | LIBKML | Yes | Yes | No, needs libkml |
MapInfo File | MapInfo File | Yes | Yes | Yes |
Microstation DGN | DGN | Yes | No | Yes |
Access MDB (PGeo and Geomedia capable) | MDB | No | Yes | No, needs JDK/JRE |
Memory | Memory | Yes | Yes | Yes |
MySQL | MySQL | No | Yes | No, needs MySQL library |
NAS - ALKIS | NAS | No | Yes | No, needs Xerces |
Oracle Spatial | OCI | Yes | Yes | No, needs OCI library |
ODBC | ODBC | No | Yes | No, needs ODBC library |
MS SQL Spatial | MSSQLSpatial | Yes | Yes | No, needs ODBC library |
OGDI Vectors (VPF, VMAP, DCW) | OGDI | No | Yes | No, needs OGDI library |
OpenAir | OpenAir | No | Yes | Yes |
PCI Geomatics Database File | PCIDSK | No | No | Yes, using internal PCIDSK SDK (from GDAL 1.7.0) |
PDS | PDS | No | Yes | Yes |
PGDump | PostgreSQL SQL dump | Yes | Yes | Yes |
PostgreSQL/PostGIS | PostgreSQL/PostGIS | Yes | Yes | No, needs PostgreSQL client library (libpq) |
EPIInfo .REC | REC | No | No | Yes |
S-57 (ENC) | S57 | No | Yes | Yes |
SDTS | SDTS | No | Yes | Yes |
Norwegian SOSI Standard | SOSI | No | Yes | No, needs FYBA library |
SQLite/SpatiaLite | SQLite | Yes | Yes | No, needs libsqlite3 or libspatialite |
SUA | SUA | No | Yes | Yes |
SVG | SVG | No | Yes | No, needs libexpat |
UK .NTF | UK .NTF | No | Yes | Yes |
U.S. Census TIGER/Line | TIGER | No | Yes | Yes |
VFK data | VFK | No | Yes | Yes |
VRT - Virtual Datasource | VRT | No | Yes | Yes |
OGC WFS (Web Feature Service) | WFS | Yes | Yes | No, needs libcurl |
X-Plane/Flightgear aeronautical data | XPLANE | No | Yes | Yes |
This driver reads text files describing aeronav information - obstacles, navaids and routes - as provided by the FAA.
The OGR ArcObjects driver provides read-only access to ArcObjects based datasources. Since it uses the ESRI SDK, it requires an ESRI license to run. Nevertheless, this also means that the driver has full knowledge of ESRI abstractions. Among these, you have:
Although it has not been extended to do this yet (there hasn't been a need), it can potentially also support the following GeoDatabase abstractions. You can try those above and they may work, but they have not been tested. Note that the abstractions above cannot be supported with the Open FileGeoDatabase API.
Prefix the Datasource with "AO:"
ogr2ogr -overwrite -skipfailures -f "PostgreSQL"
PG:"host=myhost user=myuser dbname=mydb password=mypass"
AO:"C:\somefolder\BigFileGDB.gdb" "MyFeatureClass"
ogrinfo -al AO:"C:\somefolder\PersonalGDB.mdb"
ogrinfo -al AO:"C:\somefolder\MySDEConnection.sde"
Read the GDAL Windows Building example for Plugins. You will find a similar section in nmake.opt for ArcObjects. After you are done, go to the $gdal_source_root\ogr\ogrsf_frmts\arcobjects folder and execute:
nmake /f makefile.vc plugin
nmake /f makefile.vc plugin-install
Date and blob fields have not been implemented. It is probably just a few lines of code; I just have not had time (or need) to do it.
The label, arc, polygon, centroid, region and text sections of a coverage are all supported as layers. Attributes from INFO are appended to labels, arcs, polygons or regions where appropriate. When available, the projection information is read and translated. Polygon geometries are collected for polygon and region layers from the composing arcs.
Text sections are represented as point layers. Display height is preserved in the HEIGHT attribute field; however, other information about text orientation is discarded.
Info tables associated with a coverage, but not specifically named to be attached to one of the existing geometric layers, are currently not accessible through OGR. Note that info tables are stored in an 'info' directory at the same level as the coverage directory. If this is inaccessible or corrupt, no info attributes will be appended to coverage layers, but the geometry should still be accessible.
If the directory contains files with names like w001001.adf then the coverage is a grid coverage suitable to read with GDAL, not a vector coverage supported by OGR.
The layers are named as follows:
Random (by FID) reads of arcs and polygons are supported; random reads may not be supported for other feature types.
The label, arc, polygon, centroid, region and text sections of a coverage are all supported as layers. Attributes from INFO are appended to labels, arcs, polygons or regions where appropriate. When available, the projection information is read and translated. Polygon geometries are collected for polygon and region layers from the composing arcs.
Text sections are represented as point layers. Display height is preserved in the HEIGHT attribute field; however, other information about text orientation is discarded.
Info tables associated with a coverage, but not specifically named to be attached to one of the existing geometric layers, are currently not accessible through OGR. Note that info tables are stored in an 'info' directory at the same level as the coverage directory. If this is inaccessible or corrupt, no info attributes will be appended to coverage layers, but the geometry should still be accessible.
The layers are named as follows:
OGR has support for BNA reading and writing.
The OGR driver supports reading and writing of all the BNA feature types :
To be recognized as BNA, the file extension must be ".bna". When reading a BNA file, the driver will scan it entirely to find out which layers are available. If the file name is foo.bna, the layers will be named foo_points, foo_polygons, foo_lines and foo_ellipses.
The BNA driver supports reading polygons with holes or lakes. It determines what is a hole or a lake only from geometrical analysis (inclusion and non-intersection tests) and completely ignores polygon winding (whether the polygon edges are described clockwise or counter-clockwise). GDAL must be built with GEOS enabled for the geometry tests to work. Polygons are exposed as multipolygons in the OGR Simple Feature model.
Ellipses and circles are discretized as polygons with 360 points.
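The 360-point discretization described above can be sketched as follows; this is a hypothetical helper illustrating the idea, not the driver's actual code.

```python
import math

def circle_to_ring(cx, cy, r, n=360):
    """Discretize a circle into an n-point polygon ring (closed)."""
    ring = [(cx + r * math.cos(math.radians(a)),
             cy + r * math.sin(math.radians(a))) for a in range(n)]
    ring.append(ring[0])  # close the ring: last point repeats the first
    return ring

# Unit circle centered at the origin: 360 vertices plus the closing point.
ring = circle_to_ring(0.0, 0.0, 1.0)
```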
If the output file already exists, the writing will not occur. You have to delete the existing file first.
The BNA writer supports the following creation options (dataset options):
ogrinfo -ro -al a_bna_file.bna
The ogr2ogr utility can be used to do BNA to BNA translation:
ogr2ogr -f BNA -dsco "NB_IDS=2" -dsco "ELLIPSES_AS_ELLIPSES=NO" output.bna input.bna
This driver can connect to a CouchDB service, potentially enabled with the GeoCouch spatial extension.
GDAL/OGR must be built with Curl support in order for the CouchDB driver to be compiled.
The driver supports read and write operations.
OGR preferably handles CouchDB documents that follow the GeoJSON specification.
couchdb:http://example.com[/layername]

where http://example.com points to the root of a CouchDB repository and, optionally, layername is the name of a CouchDB database.
It is also possible to directly open a view :
couchdb:http://example.com/layername/_design/adesigndoc/_view/aview[?include_docs=true]

The include_docs=true might be needed depending on the value returned by the emit() call in the map() function.
By default, the driver will try the following spatial filter function "_design/ogr_spatial/_spatial/spatial", which is the valid spatial filter function for layers created by OGR. If that filter function does not exist, but another one exists, you can specify it with the COUCHDB_SPATIAL_FILTER configuration option.
Note that the first time an attribute request is issued, it might require write permissions in the database to create a new index view.
Write support is only enabled when the datasource is opened in update mode.
When inserting a new feature with CreateFeature(), and if the command is successful, OGR will fetch the returned _id and _rev and use them.
It is possible to surround the CreateFeature()/SetFeature() operations between OGRLayer::StartTransaction() and OGRLayer::CommitTransaction(). The operations will be stored into memory and only executed at the time CommitTransaction() is called.
ogrinfo -ro "couchdb:http://some_account.some_couchdb_server.com"
ogr2ogr -f couchdb "couchdb:http://some_account.some_couchdb_server.com" shapefile.shp
OGR supports reading and writing primarily non-spatial tabular data stored in text CSV files. CSV files are a common interchange format between software packages supporting tabular data and are also easily produced manually with a text editor or with end-user written scripts or programs.
While in theory .csv files could have any extension, in order to auto-recognise the format OGR only supports CSV files ending with the extension ".csv". The datasource name may be either a single CSV file or a directory. For a directory to be recognised as a .csv datasource, at least half the files in the directory need to have the extension .csv. One layer (table) is produced from each .csv file accessed.
Starting with GDAL 1.8.0, for files structured as CSV, but not ending with .CSV extension, the 'CSV:' prefix can be added before the filename to force loading by the CSV driver.
The OGR CSV driver supports reading and writing. Because the CSV format has variable length text lines, reading is done sequentially. Reading features in random order will generally be very slow. OGR CSV layers never have a coordinate system. When reading, a field named "WKT" is assumed to contain WKT geometry, but is also treated as a regular field. The OGR CSV driver returns all attribute columns with a type of string if no field type information file (with .csvt extension) is available.
Limited type recognition can be done for Integer, Real, String, Date (YYYY-MM-DD), Time (HH:MM:SS+nn) and DateTime (YYYY-MM-DD HH:MM:SS+nn) columns through a descriptive file with the same name as the CSV file, but with a .csvt extension. In a single line the types for each column have to be listed: double quoted and comma separated (e.g., "Integer","String"). It is also possible to specify explicitly the width and precision of each column, e.g. "Integer(5)","Real(10.7)","String(15)". The driver will then use these types as specified for the csv columns.
CSV files have one line for each feature (record) in the layer (table). The attribute field values are separated by commas. At least two fields per line must be present. Lines may be terminated by DOS (CR/LF) or Unix (LF) style line terminators. Each record should have the same number of fields. Starting with GDAL 1.7.0, the driver will also accept a semicolon or a tabulation character as field separator. This autodetection will work only if there's no other potential separator on the first line of the CSV file. Otherwise it will default to comma as separator.
Complex attribute values (such as those containing commas, quotes or newlines) may be placed in double quotes. Any occurrences of double quotes within the quoted string should be doubled up to "escape" them.
The driver attempts to treat the first line of the file as a list of field names for all the fields. However, if one or more of the names is all numeric it is assumed that the first line is actually data values and dummy field names are generated internally (field_1 through field_n) and the first record is treated as a feature.
Example (employee.csv):

ID,Salary,Name,Comments
132,55000.0,John Walker,"The ""big"" cheese."
133,11000.0,Jane Lake,Cleaning Staff
Note that the Comments value for the first data record is placed in double quotes because the value contains quotes, and those quotes have to be doubled up so we know we haven't reached the end of the quoted string yet.
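This quote-doubling convention is the same one Python's csv module implements by default, so the employee.csv example above can be reproduced as a quick check:

```python
import csv
import io

# csv.writer uses QUOTE_MINIMAL by default: only fields containing a
# separator, a quote or a newline are quoted, and embedded double
# quotes are escaped by doubling them.
buf = io.StringIO()
writer = csv.writer(buf, lineterminator="\n")
writer.writerow(["ID", "Salary", "Name", "Comments"])
writer.writerow([132, 55000.0, "John Walker", 'The "big" cheese.'])
writer.writerow([133, 11000.0, "Jane Lake", "Cleaning Staff"])
print(buf.getvalue())
```

Reading the result back with csv.reader restores the original unescaped value.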
Many variations of textual input are sometimes called Comma Separated Value files, including files without commas but with fixed column widths, those using tabs as separators, or those with other auxiliary data defining field types or structure. This driver does not attempt to support all such files, but instead supports simple .csv files that can be auto-recognised. Scripts or other mechanisms can generally be used to convert other variations into a form that is compatible with the OGR CSV driver.
It is possible to extract spatial information (points) from a CSV file which has columns for the X and Y coordinates, through the use of the VRT driver.
Consider the following CSV file (test.csv):
Latitude,Longitude,Name
48.1,0.25,"First point"
49.2,1.1,"Second point"
47.5,0.75,"Third point"

You can write the associated VRT file (test.vrt):
<OGRVRTDataSource> <OGRVRTLayer name="test"> <SrcDataSource>test.csv</SrcDataSource> <GeometryType>wkbPoint</GeometryType> <LayerSRS>WGS84</LayerSRS> <GeometryField encoding="PointFromColumns" x="Longitude" y="Latitude"/> </OGRVRTLayer> </OGRVRTDataSource>

and ogrinfo -ro -al test.vrt will return:
OGRFeature(test):1 Latitude (String) = 48.1 Longitude (String) = 0.25 Name (String) = First point POINT (0.25 48.1 0) OGRFeature(test):2 Latitude (String) = 49.2 Longitude (String) = 1.1 Name (String) = Second point POINT (1.1 49.200000000000003 0) OGRFeature(test):3 Latitude (String) = 47.5 Longitude (String) = 0.75 Name (String) = Third point POINT (0.75 47.5 0)
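Conceptually, the PointFromColumns encoding simply builds a point geometry from two named attribute columns. A minimal sketch of that step over the test.csv data above (illustrative only, not how the VRT driver is implemented):

```python
import csv
import io

# The same records as test.csv above.
data = '''Latitude,Longitude,Name
48.1,0.25,"First point"
49.2,1.1,"Second point"
47.5,0.75,"Third point"'''

# Build a WKT point per record from the Longitude/Latitude columns,
# mirroring x="Longitude" y="Latitude" in the VRT GeometryField.
wkt = [
    "POINT (%s %s)" % (row["Longitude"], row["Latitude"])
    for row in csv.DictReader(io.StringIO(data))
]
print(wkt[0])  # POINT (0.25 48.1)
```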
The driver supports creating new databases (as a directory of .csv files), adding new .csv files to an existing directory of .csv files, or appending features to an existing .csv table. Deleting or replacing existing features is not supported.
Layer Creation options:
ogr2ogr -f CSV output.csv input.shp -lco GEOMETRY=AS_XYZ
DGN files are considered to have no georeferencing information through OGR. Features will all have the following generic attributes:
The various kinds of linear geometries will carry style information indicating the color, thickness and line style (i.e. dotted, solid, etc).
Polygons (Shape elements) will carry styling information for the edge as well as a fill color if provided. Fill patterns are not supported.
Text elements will contain the text, angle, color and size information (expressed in ground units) in the style string.
When opening a database, its name should be specified in the form "DODS:url". The URL may include a constraint expression as shown here. Note that it may be necessary to quote or otherwise protect DODS URLs on the commandline if they include question mark or ampersand characters, as these often have special meaning to command shells.
DODS:http://dods.gso.uri.edu/dods-3.4/nph-dods/broad1999?&press=148

By default, top level Sequence, Grid and Array objects will be translated into corresponding layers. Sequences are (by default) treated as point layers with the point geometries picked up from lat and lon variables if available. To provide more sophisticated translation of sequence, grid or array items into features it is necessary to provide additional information to OGR as DAS (dataset auxiliary information), either from the remote server or locally via the AIS mechanism.
A DAS definition for an OGR layer might look something like:
Attributes { ogr_layer_info { string layer_name WaterQuality; string spatial_ref WGS84; string target_container Water_Quality; layer_extents { Float64 x_min -180; Float64 y_min -90; Float64 x_max 180; Float64 y_max 90; } x_field { string name YR; string scope dds; } y_field { string name JD; string scope dds; } } }
DXF files are considered to have no georeferencing information through OGR. Features will all have the following generic attributes:
The approximation of arcs, ellipses, circles and rounded polylines as linestrings is done by splitting the arcs into subarcs of no more than a threshold angle. This angle defaults to four degrees, but may be overridden by setting the configuration variable OGR_ARC_STEPSIZE.
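The splitting rule amounts to choosing enough segments that no subarc exceeds the step angle. A minimal sketch under that assumption (a hypothetical helper, not GDAL's internal implementation):

```python
import math

def arc_to_points(cx, cy, r, start_deg, end_deg, max_step=4.0):
    """Approximate a circular arc by vertices at most max_step degrees apart."""
    sweep = end_deg - start_deg
    # Enough segments that each subarc spans <= max_step degrees.
    n = max(1, math.ceil(abs(sweep) / max_step))
    return [
        (cx + r * math.cos(math.radians(start_deg + sweep * i / n)),
         cy + r * math.sin(math.radians(start_deg + sweep * i / n)))
        for i in range(n + 1)
    ]

# A quarter circle of radius 10: 90 / 4 rounds up to 23 segments (24 points).
pts = arc_to_points(0.0, 0.0, 10.0, 0.0, 90.0)
```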
In some cases the $DWGCODEPAGE setting in a DXF file will be wrong, or unrecognised by OGR. It could be edited manually, or the DXF_ENCODING configuration variable can be used to override the encoding OGR uses when transcoding. The value of DXF_ENCODING should be an encoding name supported by CPLRecode() (i.e. an iconv name), not a DXF $DWGCODEPAGE name. Using a DXF_ENCODING name of "UTF-8" will avoid any attempt to recode the text as it is read.
Point features with LABEL styling are written as MTEXT entities based on the styling information.
Point features without LABEL styling are written as POINT entities.
LineString, MultiLineString, Polygon and MultiPolygons are written as one or more LWPOLYLINE entities, closed in the case of polygon rings. An effort is made to preserve line width and color.
Dataset creation supports the following creation options:
This driver reads files encoded in the French EDIGEO exchange format, a text based file format aimed at exchanging geographical information between GIS, with powerful description capabilities, topology modelling, etc.
The driver has been developed to read files of the French PCI (Plan Cadastral Informatisé - Digital Cadastral Plan) as produced by the DGI (Direction Générale des Impôts - General Tax Office). The driver should also be able to open other EDIGEO based products.
The driver must be provided with the .THF file describing the EDIGEO exchange and it will read the associated .DIC, .GEO, .SCD, .QAL and .VEC files.
In order for the SRS of the layers to be correctly built, the IGNF file that contains the definition of IGN SRS must be placed in the directory of PROJ.4 resource files.
The whole set of files will be parsed into memory. This may be a limitation if dealing with big EDIGEO exchanges.
It will also add the following fields:
By default, OGR will create specific layers (xxx_LABEL) to dispatch into the various labels of the ID_S_OBJ_Z_1_2_2 layer according to the value of xxx=OBJ_OBJ_LNK_LAYER. This can be disabled by setting OGR_EDIGEO_CREATE_LABEL_LAYERS to NO.
The FileGDB driver provides read and write access to File Geodatabases (.gdb directories) created by ArcGIS 10 and above.
None.
ogr2ogr -overwrite -skipfailures -f "PostgreSQL"
PG:"host=myhost user=myuser dbname=mydb password=mypass"
"C:\somefolder\BigFileGDB.gdb" "MyFeatureClass"
ogrinfo -al "C:\somefolder\MyGDB.gdb"
Read the GDAL Windows Building example for Plugins. You will find a similar section in nmake.opt for FileGDB. After you are done, go to the $gdal_source_root\ogr\ogrsf_frmts\filegdb folder and execute:
nmake /f makefile.vc plugin
nmake /f makefile.vc plugin-install
To use the FMEObjects based readers, the data source name passed should be the name of the FME reader to use, a colon, and then the actual data source name (i.e. the filename). For instance, "NTF:F:\DATA\NTF\2144.NTF" would indicate the NTF reader should be used to read the file. There are a number of special cases:
The cached feature files will have the prefix FME_OLEDB_ and a master index is kept in the file ogrfmeds.ind. To clear away the index, delete all these files. Do not just delete some.
By default, features in the cache are re-read after 3600s (60 minutes). Cache retention times can be altered at compile time by altering the fme2ogr.h include file.
Input from the SDE and ORACLE readers are not cached. These sources are treated specially in a number of other ways as well.
The FMEObjects gateway has been tested on Linux and Windows.
The OGR GeoConcept driver treats a single GeoConcept file within a directory as a dataset comprising layers. GeoConcept file extensions are .txt or .gxt.
Currently the GeoConcept driver only supports multi-polygons, lines and points.
It is an object oriented GIS, where the features are named "objects", and feature types are named "type/subtype" (classes allowing inheritance).
Among its import/export formats, it offers a simple text format named gxt. A gxt file may contain objects of several types/subtypes.
GeoConcept text export files should be available for writing and reading.
The OGR GeoConcept driver treats a single GeoConcept file (.txt or .gxt) as a dataset.
GeoConcept files can store multiple kinds of geometry (one by layer), even if a GeoConcept layer can only have one kind of geometry.
Note this makes it very difficult to translate a mixed geometry layer from another format into GeoConcept format using ogr2ogr, since ogr2ogr has no support for separating out geometries from a source layer.
A GeoConcept sub-type is treated as an OGR feature type. The name of a layer is therefore the concatenation of the GeoConcept type name, '.', and the GeoConcept sub-type name.
GeoConcept type definitions (.gct files) are used for creation only.
GeoConcept feature field definitions are stored in an associated .gct file, and so fields suffer a number of limitations (FIXME):
TXT
was used by earlier releases of GeoConcept.
GXT
is currently used.
CONFIG=path to the GCT: the GCT file describes the GeoConcept type definitions. In this file, every line must start with //# followed by a keyword; lines starting with // are comments.
It is important to note that a GeoConcept export file can hold different types and associated sub-types.
The configuration section starts with //#SECTION CONFIG and ends with //#ENDSECTION CONFIG. All the configuration is enclosed within these marks.
The map section starts with //#SECTION MAP and ends with //#ENDSECTION MAP.
A type is defined by a name (keyword Name) and an ID (keyword ID). A type holds sub-types and fields. This section starts with //#SECTION TYPE and ends with //#ENDSECTION TYPE.
A sub-type is defined by a name (keyword Name), an ID (keyword ID), a type of geometry (keyword Kind) and a dimension. The following types of geometry are supported: POINT, LINE, POLYGON. The current release of this driver does not support the TEXT geometry. The dimension can be 2D, 3DM or 3D. A sub-type holds fields. This section starts with //#SECTION SUBTYPE and ends with //#ENDSECTION SUBTYPE.
A field is defined by a name (keyword Name), an ID (keyword ID) and a type (keyword Kind). The following types of fields are supported: INT, REAL, MEMO, CHOICE, DATE, TIME, LENGTH, AREA. This section starts with //#SECTION FIELD and ends with //#ENDSECTION FIELD.
The GeoConcept private fields are: Identifier, Class, Subclass, Name, NbFields, X, Y, XP, YP, Graphics, Angle. For a point sub-type, Identifier, Class, Subclass, Name, X and Y must be declared. For a linear sub-type, XP and YP must also be declared. For a polygonal sub-type, Graphics must be declared. Angle may be declared.
When writing, the layer name can be given as TYPE.SUBTYPE (for example with the -nln option of ogr2ogr). TYPE corresponds to one of the Name entries found in the GCT file for a type section. SUBTYPE corresponds to one of the Name entries found in the GCT file for a sub-type section within the previous type section.
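As an illustration of the line-oriented GCT syntax described above, the following sketch (plain Python with a hypothetical helper name, not part of GDAL) collects section names and keyword assignments from GCT text:

```python
def parse_gct(text):
    """Collect (section, key, value) triples from GCT configuration text.

    Lines starting with '//#' carry keywords; other '//' lines are comments.
    """
    entries = []
    stack = []  # currently open //#SECTION names
    for line in text.splitlines():
        line = line.strip()
        if not line.startswith("//#"):
            continue  # a '//' comment or blank line
        body = line[3:].strip()
        if body.startswith("SECTION"):
            stack.append(body.split()[1])
        elif body.startswith("ENDSECTION"):
            stack.pop()
        elif "=" in body:
            key, value = body.split("=", 1)
            entries.append((stack[-1], key.strip(), value.strip()))
    return entries

sample = """//#SECTION CONFIG
//#SECTION TYPE
//# Name=TILE
//# ID=10
//#ENDSECTION TYPE
//#ENDSECTION CONFIG"""
print(parse_gct(sample))  # [('TYPE', 'Name', 'TILE'), ('TYPE', 'ID', '10')]
```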
Currently, coordinates are written with 2 decimal places for cartesian spatial reference systems (including height) or with 9 decimal places for geographic spatial reference systems.
//#SECTION CONFIG
//#SECTION MAP
//# Name=SCAN1000-TILES-LAMB93
//# Unit=m
//# Precision=1000
//#ENDSECTION MAP
//#SECTION TYPE
//# Name=TILE
//# ID=10
//#SECTION SUBTYPE
//# Name=TILE
//# ID=100
//# Kind=POLYGON
//# 3D=2D
//#SECTION FIELD
//# Name=IDSEL
//# ID=101
//# Kind=TEXT
//#ENDSECTION FIELD
//#SECTION FIELD
//# Name=NOM
//# ID=102
//# Kind=TEXT
//#ENDSECTION FIELD
//#SECTION FIELD
//# Name=WITHDATA
//# ID=103
//# Kind=INT
//#ENDSECTION FIELD
//#ENDSECTION SUBTYPE
//#ENDSECTION TYPE
//#SECTION FIELD
//# Name=@Identifier
//# ID=-1
//# Kind=INT
//#ENDSECTION FIELD
//#SECTION FIELD
//# Name=@Class
//# ID=-2
//# Kind=CHOICE
//#ENDSECTION FIELD
//#SECTION FIELD
//# Name=@Subclass
//# ID=-3
//# Kind=CHOICE
//#ENDSECTION FIELD
//#SECTION FIELD
//# Name=@Name
//# ID=-4
//# Kind=TEXT
//#ENDSECTION FIELD
//#SECTION FIELD
//# Name=@X
//# ID=-5
//# Kind=REAL
//#ENDSECTION FIELD
//#SECTION FIELD
//# Name=@Y
//# ID=-6
//# Kind=REAL
//#ENDSECTION FIELD
//#SECTION FIELD
//# Name=@Graphics
//# ID=-7
//# Kind=REAL
//#ENDSECTION FIELD
//#ENDSECTION CONFIG
//$DELIMITER " "
//$QUOTED-TEXT "no"
//$CHARSET ANSI
//$UNIT Distance=m
//$FORMAT 2
//$SYSCOORD {Type: 2001}
//$FIELDS Class=TILE;Subclass=TILE;Kind=4;Fields=Private#Identifier Private#Class Private#Subclass Private#Name Private#NbFields IDSEL NOM WITHDATA Private#X Private#Y Private#Graphics
-1 TILE TILE TILE 3 000-2007-0050-7130-LAMB93 0 50000.00 4 600000.00 7130000.00 600000.00 6580000.00 50000.00 6580000.00 50000.00 7130000.00
-1 TILE TILE TILE 3 000-2007-0595-7130-LAMB93 0 595000.00 4 1145000.00 7130000.00 1145000.00 6580000.00 595000.00 6580000.00 595000.00 7130000.00
-1 TILE TILE TILE 3 000-2007-0595-6585-LAMB93 0 595000.00 4 1145000.00 6585000.00 1145000.00 6035000.00 595000.00 6035000.00 595000.00 6585000.00
-1 TILE TILE TILE 3 000-2007-1145-6250-LAMB93 0 1145000.00 4 1265000.00 6250000.00 1265000.00 6030000.00 1145000.00 6030000.00 1145000.00 6250000.00
-1 TILE TILE TILE 3 000-2007-0050-6585-LAMB93 0 50000.00 4 600000.00 6585000.00 600000.00 6035000.00 50000.00 6035000.00 50000.00 6585000.00
ogr2ogr -f Geoconcept -a_srs +init=IGNF:LAMB93 -dsco EXTENSION=txt -dsco CONFIG=tile_schema.gct tile.gxt tile.shp -lco FEATURETYPE=TILE.TILE

Appending new features to an existing GeoConcept export file :

ogr2ogr -f Geoconcept -update -append tile.gxt tile.shp -nln TILE.TILE

Translating a GeoConcept export file layer into MapInfo file :

ogr2ogr -f "MapInfo File" -dsco FORMAT=MIF tile.mif tile.gxt TILE.TILE
This driver implements read/write support for features encoded in the GeoJSON format. GeoJSON is a dialect based on the JavaScript Object Notation (JSON). JSON is a lightweight plain text format for data interchange, and GeoJSON is nothing other than its specialization for geographic content.
At the time of writing, GeoJSON is supported as an output format of services implemented by FeatureServer, GeoServer and CartoWeb.
The OGR GeoJSON driver translates GeoJSON encoded data to objects of the OGR Simple Features model: Datasource, Layer, Feature, Geometry. The implementation is based on the GeoJSON Specification draft, v5.0.
Starting with OGR 1.8.0, the GeoJSON driver can read the JSON output of Feature Service requests following the GeoServices REST Specification, as implemented by the ArcGIS Server REST API.
The OGR GeoJSON driver accepts three types of sources of data:
A GeoJSON datasource is translated to single OGRLayer object with pre-defined name OGRGeoJSON:
ogrinfo -ro http://featureserver/data/.geojson OGRGeoJSON

It's also valid to assume that OGRDataSource::GetLayerCount() for a GeoJSON datasource always returns 1.
When accessing a Web Service as a datasource (i.e. FeatureServer), each request will produce a new layer. This behavior conforms to the stateless nature of HTTP transactions and is similar to how Web browsers operate: a single request == a single page.
If the top-level member of GeoJSON data is of any type other than FeatureCollection, the driver will produce a layer with only one feature. Otherwise, the layer will consist of a set of features.
The OGR GeoJSON driver maps each object of following types to new OGRFeature object: Point, LineString, Polygon, GeometryCollection, Feature.
According to the GeoJSON Specification, only the Feature object must have a member with the name properties. Each member of properties is translated to an OGR object of type OGRField and added to the corresponding OGRFeature object.
The GeoJSON Specification does not require that all Feature objects in a collection have the same schema of properties. If Feature objects in a set defined by a FeatureCollection object have different schemas of properties, then the resulting schema of fields in OGRFeatureDefn is generated as the union of all Feature properties.
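The union-of-schemas behavior can be pictured with plain Python and the stdlib json module (a sketch of the principle, not the driver's actual code):

```python
import json

collection = json.loads("""{
  "type": "FeatureCollection",
  "features": [
    {"type": "Feature", "properties": {"name": "A", "pop": 10}, "geometry": null},
    {"type": "Feature", "properties": {"name": "B", "area": 2.5}, "geometry": null}
  ]
}""")

# The resulting OGR schema is the union of all Feature properties,
# in order of first appearance.
fields = []
for feature in collection["features"]:
    for key in feature["properties"]:
        if key not in fields:
            fields.append(key)
print(fields)  # ['name', 'pop', 'area']
```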
It is possible to tell the driver not to process attributes by setting the environment variable ATTRIBUTES_SKIP=YES. The default behavior is to preserve all attributes (as a union, see the previous paragraph), which is equivalent to setting ATTRIBUTES_SKIP=NO.
Similarly to the issue with mixed-properties features, the GeoJSON Specification draft does not require that all Feature objects in a collection have geometry of the same type. Fortunately, the OGR object model does allow geometries of different types in a single layer - a heterogeneous layer. By default, the GeoJSON driver preserves the type of geometries.
However, sometimes there is a need to generate a homogeneous layer from a set of heterogeneous features. For this purpose, it's possible to tell the driver to wrap all geometries in the OGRGeometryCollection type as a common denominator. This behavior can be controlled by setting the environment variable GEOMETRY_AS_COLLECTION=YES (the default is NO).
How to dump content of .geojson file:
ogrinfo -ro point.geojson
How to query features from remote service with filtering by attribute:
ogrinfo -ro http://featureserver/cities/.geojson OGRGeoJSON -where "name=Warsaw"
How to translate a number of features queried from FeatureServer to ESRI Shapefile:
ogr2ogr -f "ESRI Shapefile" cities.shp http://featureserver/cities/.geojson OGRGeoJSON
Read the result of a FeatureService request against a GeoServices REST server:
ogrinfo -ro -al "http://sampleserver3.arcgisonline.com/ArcGIS/rest/services/Hydrography/Watershed173811/FeatureServer/0/query?where=objectid+%3D+objectid&outfields=*&f=json"
GDAL/OGR >= 1.9.0
OGR optionally supports reading Geomedia .mdb files via ODBC. Geomedia is a Microsoft Access database with a set of tables defined by Intergraph for holding geodatabase metadata, and with geometry for features held in a BLOB column in a custom format. This driver accesses the database via ODBC but does not depend on any Intergraph middleware.
Geomedia .mdb files are accessed by passing the file name of the .mdb file as the data source name. On Windows, no ODBC DSN is required. On Linux, there are problems with DSN-less connections due to an incomplete or buggy implementation of this feature in the MDB Tools package, so it is required to configure a Data Source Name (DSN) if the MDB Tools driver is used (check the instructions below).
In order to facilitate compatibility with different configurations, the GEOMEDIA_DRIVER_TEMPLATE config option was added to provide a way to programmatically set the DSN with the filename as an argument. In cases where the driver name is known, this allows the construction of the DSN based on that information, in a manner similar to the default (used for Windows access to the Microsoft Access Driver).
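The template mechanism amounts to simple string substitution: the filename is inserted into a driver-specific connection string. A sketch of the idea (the template value shown is an assumption modeled on the Windows Access driver default, not taken from this page):

```python
# Hypothetical template, in the style of the default Windows Access driver DSN.
template = "DRIVER=Microsoft Access Driver (*.mdb);DBQ=%s"

def build_dsn(template, filename):
    """Expand a printf-style driver template with the .mdb filename."""
    return template % filename

print(build_dsn(template, "C:/data/city.mdb"))
# DRIVER=Microsoft Access Driver (*.mdb);DBQ=C:/data/city.mdb
```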
OGR treats all feature tables as layers. Most geometry types should be supported (arcs are not yet). Coordinate system information is not currently supported.
Currently the OGR Geomedia driver does not take advantage of spatial indexes for fast spatial queries.
By default, SQL statements are passed directly to the MDB database engine. It's also possible to request the driver to handle SQL commands with the OGR SQL engine, by passing the "OGRSQL" string to the ExecuteSQL() method as the name of the SQL dialect.
Starting with GDAL/OGR 1.9.0, the MDB driver is an alternate way of reading Geomedia .mdb files without requiring unixODBC and MDB Tools.
Refer to the similar section of the PGeo driver. The prefix to use for this driver is Geomedia:
GeoRSS is a way of encoding location in RSS or Atom feeds.
OGR has support for GeoRSS reading and writing. Read support is only available if GDAL is built with Expat library support.
The driver supports RSS documents in RSS 2.0 or Atom 1.0 format.
It also supports the 3 ways of encoding location : GeoRSS simple, GeoRSS GML and W3C Geo (the latter being deprecated).
The driver can read and write documents without location information as well.
The default datum for GeoRSS documents is the WGS84 datum (EPSG:4326). Although GeoRSS locations are encoded in latitude-longitude order in the XML file, all coordinates reported or expected by the driver are in longitude-latitude order. The longitude/latitude order used by OGR is meant for compatibility with the rest of the OGR drivers and utilities. For locations encoded in GML, the driver supports the srsName attribute for describing other SRS.
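For instance, a GeoRSS simple point stores latitude first, while OGR reports longitude first. A stdlib-only sketch of that swap (illustrative, not the driver's code):

```python
def georss_point_to_wkt(text):
    """Convert a '<georss:point>lat lon</georss:point>' payload to OGR-style WKT.

    GeoRSS encodes latitude first; OGR reports coordinates as (longitude, latitude).
    """
    lat, lon = map(float, text.split())
    return "POINT (%g %g)" % (lon, lat)

print(georss_point_to_wkt("49 2"))  # POINT (2 49)
```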
Simple and GML encoding support the notion of a box as a geometry. This will be decoded as a rectangle (Polygon geometry) in OGR Simple Feature model.
A single layer is returned while reading an RSS document. Features are retrieved from the content of <item> (RSS document) or <entry> (Atom document) elements.
The content returned by OGR will be encoded in UTF-8, after conversion from the encoding mentioned in the file header.
If your GeoRSS file is not encoded in one of the previous encodings, it will not be parsed by the GeoRSS driver. You may convert it into one of the supported encodings with the iconv utility, for example, and change the encoding parameter value in the XML header accordingly.
When writing a GeoRSS file, the driver expects UTF-8 content to be passed in.
The driver will return elements found in the base schema of RSS channels or Atom feeds. It will also return extension elements that are allowed in namespaces.
Attributes of first level elements will be exposed as fields.
Complex content (elements inside first level elements) will be returned as an XML blob.
When the same element is repeated, a number will be appended to the attribute name for the repetitions. This is useful for the <category> element in RSS and Atom documents, for example.
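That renaming of repeated elements can be sketched as follows (plain Python, illustrative only):

```python
def uniquify(names):
    """Append 2, 3, ... to repeated element names, as done for repeated fields."""
    seen = {}
    result = []
    for name in names:
        count = seen.get(name, 0) + 1
        seen[name] = count
        result.append(name if count == 1 else "%s%d" % (name, count))
    return result

print(uniquify(["title", "category", "category"]))
# ['title', 'category', 'category2']
```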
The following content :
<item>
  <title>My tile</title>
  <link>http://www.mylink.org</link>
  <description>Cool descriprion !</description>
  <pubDate>Wed, 11 Jul 2007 15:39:21 GMT</pubDate>
  <guid>http://www.mylink.org/2007/07/11</guid>
  <category>Computer Science</category>
  <category>Open Source Software</category>
  <georss:point>49 2</georss:point>
  <myns:name type="my_type">My Name</myns:name>
  <myns:complexcontent>
    <myns:subelement>Subelement</myns:subelement>
  </myns:complexcontent>
</item>

will be interpreted in the OGR SF model as :
title (String) = My title
link (String) = http://www.mylink.org
description (String) = Cool descriprion !
pubDate (DateTime) = 2007/07/11 15:39:21+00
guid (String) = http://www.mylink.org/2007/07/11
category (String) = Computer Science
category2 (String) = Open Source Software
myns_name (String) = My Name
myns_name_type (String) = my_type
myns_complexcontent (String) = <myns:subelement>Subelement</myns:subelement>
POINT (2 49)
If the output file already exists, the writing will not occur. You have to delete the existing file first.
A layer that is created cannot be immediately read without closing and reopening the file. That is to say, a dataset is read-only or write-only within the same session.
Supported geometries :
The GeoRSS writer supports the following dataset creation options:
When translating from a source dataset, it may be necessary to rename the field names from the source dataset to the expected RSS or Atom attribute names, such as <title>, <description>, etc. This can be done with an OGR VRT dataset, or by using the "-sql" option of the ogr2ogr utility (see RFC 21: OGR SQL type cast and field name alias).
ogrinfo -ro -al input.xml
ogr2ogr -f GeoRSS output.xml input.xml "select link_href as link, title, content as description, author_name as author, id as guid from georss"
#!/usr/bin/python

import gdal
import ogr
import urllib2

url = 'http://earthquake.usgs.gov/eqcenter/catalogs/eqs7day-M5.xml'
content = None
try:
    handle = urllib2.urlopen(url)
    content = handle.read()
except urllib2.HTTPError, e:
    print 'HTTP service for %s is down (HTTP Error: %d)' % (url, e.code)
except:
    print 'HTTP service for %s is down.' % (url)

# Create in-memory file from the downloaded content
gdal.FileFromMemBuffer('/vsimem/temp', content)

ds = ogr.Open('/vsimem/temp')
lyr = ds.GetLayer(0)
feat = lyr.GetNextFeature()
while feat is not None:
    print feat.GetFieldAsString('title') + ' ' + feat.GetGeometryRef().ExportToWkt()
    feat.Destroy()
    feat = lyr.GetNextFeature()
ds.Destroy()

# Free memory associated with the in-memory file
gdal.Unlink('/vsimem/temp')
This driver can connect to the Google Fusion Tables service. GDAL/OGR must be built with Curl support in order for the GFT driver to be compiled.
The driver supports read and write operations.
GFT:
Additional optional parameters can be specified after the ':' sign, such as :
The authentication information can be provided by different means :
Fusion tables can have multiple geometry columns per table. By default, OGR will use the first geometry column it finds. It is possible to select another column as the geometry column by specifying table_name(geometry_column_name) as the layer name passed to GetLayerByName().
Write support is only enabled when the datasource is opened in update mode.
The mapping between the operations of the GFT service and the OGR concepts is the following :
It is possible to wrap CreateFeature() operations between OGRLayer::StartTransaction() and OGRLayer::CommitTransaction(). The operations will be stored in memory and only executed at the time CommitTransaction() is called. Note that the GFT service only supports up to 500 INSERTs and up to 1 MB of content per transaction.
Note : only CreateFeature() makes use of the OGR transaction mechanism. SetFeature() and DeleteFeature() will still be issued immediately.
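Since the service caps a transaction at 500 INSERTs and 1 MB, a client that buffers statements has to flush on whichever limit is hit first. A stdlib-only sketch of that batching rule (not GDAL code; only the two limits are taken from above):

```python
MAX_STATEMENTS = 500
MAX_BYTES = 1024 * 1024  # 1 MB of content per transaction

def batch_inserts(statements):
    """Group INSERT statements into transactions respecting both service limits."""
    batches, current, size = [], [], 0
    for sql in statements:
        # Flush the current batch before it would exceed either limit.
        if current and (len(current) == MAX_STATEMENTS or size + len(sql) > MAX_BYTES):
            batches.append(current)
            current, size = [], 0
        current.append(sql)
        size += len(sql)
    if current:
        batches.append(current)
    return batches

stmts = ["INSERT INTO 224453 VALUES (%d);" % i for i in range(1200)]
print([len(b) for b in batch_inserts(stmts)])  # [500, 500, 200]
```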
The SQL supported by the server natively understands only table ids, not the table names returned by OGR. For convenience, however, OGR will "patch" your SQL command to replace the table name with the table id.
ogrinfo -ro "GFT:email=john.doe@example.com password=secret_password"
ogr2ogr -f GFT "GFT:email=john.doe@example.com password=secret_password" shapefile.shp
ogrinfo -ro "GFT:tables=224453" -al -spat 67 31.5 67.5 32 -where "'Attack on' = 'ENEMY'"
ogrinfo --config CPL_DEBUG ON "GFT:email=john.doe@example.com password=secret_password"

returns:
HTTP: Fetch(https://www.google.com/accounts/ClientLogin)
HTTP: These HTTP headers were set: Content-Type: application/x-www-form-urlencoded
GFT: Auth key : A_HUGE_STRING_WITH_ALPHANUMERIC_AND_SPECIAL_CHARACTERS

Now, you can set the GFT_AUTH environment variable to that value and simply use "GFT:" as the DSN.
Supported GML flavors :
OGR version | Read | Write |
---|---|---|
OGR >= 1.8.0 | GML2 and GML3 that can be translated into simple feature model | GML 2.1.2 or GML 3 SF-0 (GML 3.1.1 Compliance level SF-0) |
OGR < 1.8.0 | GML2 and limited GML3 | GML 2.1.2 |
Note: starting with OGR 1.9.0, if both Xerces and Expat are available at build time, the GML driver will preferentially select at runtime the Expat parser for cases where it is possible (GML file in a compatible encoding), and fall back to the Xerces parser in other cases. However, the choice of the parser can be overridden by specifying the GML_PARSER configuration option as EXPAT or XERCES.
There are also situations where the srsName is of the form "EPSG:XXXX" (whereas "urn:ogc:def:crs:EPSG::XXXX" would have been more explicit on the intent) and the coordinates in the file are in (latitude, longitude) order. By default, OGR will not consider the EPSG axis order and will report the coordinates in (latitude, longitude) order. However, if you set the configuration option GML_CONSIDER_EPSG_AS_URN to YES, the rules explained in the previous paragraph will be applied.
The first time a GML file is opened, if the associated .xsd is absent or could not be parsed correctly, the file is completely scanned in order to determine the set of featuretypes, the attributes associated with each, and other dataset level information. This information is stored in a .gfs file with the same basename as the target gml file. Subsequent accesses to the same GML file will use the .gfs file to predefine dataset level information, accelerating access. To a limited extent the .gfs file can be manually edited to alter how the GML file will be parsed. Be warned that the .gfs file will be ignored if the associated .gml file has a newer timestamp.
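The timestamp rule can be sketched with the stdlib: the cached .gfs is only trusted when it exists and is at least as new as the .gml file (an illustration of the rule, not GDAL's code; the helper name is hypothetical):

```python
import os
import tempfile

def gfs_is_usable(gml_path):
    """Return True if a .gfs cache exists and is not older than the .gml file."""
    gfs_path = os.path.splitext(gml_path)[0] + ".gfs"
    if not os.path.exists(gfs_path):
        return False
    return os.path.getmtime(gfs_path) >= os.path.getmtime(gml_path)

# Demonstration with two temporary files.
d = tempfile.mkdtemp()
gml = os.path.join(d, "test.gml")
open(gml, "w").close()
print(gfs_is_usable(gml))  # False: no .gfs yet
open(os.path.join(d, "test.gfs"), "w").close()
print(gfs_is_usable(gml))  # True: the .gfs is at least as new as the .gml
```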
When prescanning the GML file to determine the list of feature types and fields, the contents of fields are scanned to try to determine the type of each field. In some applications it is easier if all fields are just treated as string fields. This can be accomplished by setting the configuration option GML_FIELDTYPES to the value ALWAYS_STRING.
OGR 1.8.0 adds support for detecting feature attributes in nested GML elements (non-flat attribute hierarchy) that can be found in some GML profiles such as UK Ordnance Survey MasterMap. OGR 1.8.0 also brings support for reading IntegerList, RealList and StringList field types when a GML element has several occurrences.
Since OGR 1.8.0, a specialized GML driver - the NAS driver - is available to read German AAA GML Exchange Format (NAS/ALKIS).
Configuration options can be set via the CPLSetConfigOption() function or as environment variables.
Starting with OGR 1.8.0, the user can change the .gfs file to select the appropriate geometry by specifying its path with the <GeometryElementPath> element. See the description of the .gfs syntax below.
OGR 1.8.0 adds support for more GML geometries including TopoCurve, TopoSurface, MultiCurve. The TopoCurve type GML geometry can be interpreted as either of two types of geometries. The Edge elements in it contain curves and their corresponding nodes. By default only the curves, the main geometries, are reported as OGRMultiLineString. To retrieve the nodes, as OGRMultiPoint, the configuration option GML_GET_SECONDARY_GEOM should be set to the value YES. When this is set only the secondary geometries are reported.
By default the resolved file will be saved in the same directory as the original file with the extension ".resolved.gml", if it doesn't exist already. This behaviour can be changed using the configuration option GML_SAVE_RESOLVED_TO. Set it to SAME to overwrite the original file. Set it to a filename ending with .gml to save it to that location. Any other values are ignored. If the resolver cannot write to the file for any reason, it will try to save it to a temporary file generated using CPLGenerateTempFilename("ResolvedGML"); if it cannot, resolution fails.
Note that the resolution algorithm is not optimised for large files. For files with more than a couple of thousand xlink:href tags, the process can take more than a few minutes. Rough progress is reported through CPLDebug() for every 256 links. It can be seen by setting the environment variable CPL_DEBUG. The resolution time can be reduced if you know of any elements that won't be needed. Specify a comma-separated list of names of such elements with the configuration option GML_SKIP_RESOLVE_ELEMS. Set it to ALL to skip resolving altogether (the default action). Set it to NONE to resolve all the xlinks.
The content returned by OGR will be encoded in UTF-8, after conversion from the encoding mentioned in the file header.
If the GML file is not encoded in one of the previous encodings and the only parser available is Expat, it will not be parsed by the GML driver. You may convert it into one of the supported encodings with the iconv utility, for example, and change the encoding parameter value in the XML header accordingly.
When writing a GML file, the driver expects UTF-8 content to be passed in.
Starting with OGR 1.9.0, the driver autodetects the presence of a fid (GML2) (resp. gml:id (GML3)) attribute at the beginning of the file, and, if found, exposes it by default as a fid (resp. gml_id) field. The autodetection can be overridden by specifying the GML_EXPOSE_FID or GML_EXPOSE_GML_ID configuration option as YES or NO.
Starting with OGR 1.9.0, when creating a GML2 document, if a field is called fid, its content will also be used to write the content of the fid attribute of the created feature.
Starting with OGR 1.9.0, the GML_READ_MODE configuration option can be set to SEQUENTIAL_LAYERS if all features belonging to the same layer are written sequentially in the file. The reader will then avoid unnecessary resets when layers are read completely one after the other. To get the best performance, the layers must be read in the order they appear in the file.
If no .xsd and .gfs files are found, the parser will detect the layout of layers when building the .gfs file. If the layers are found to be sequential, a <SequentialLayers>true</SequentialLayers> element will be written in the .gfs file, so that GML_READ_MODE will be automatically initialized to MONOBLOCK_LAYERS if not explicitly set by the user.
Starting with OGR 1.9.0, the GML_READ_MODE configuration option can be set to INTERLEAVED_LAYERS to be able to read a GML file whose features from different layers are interleaved. In that case, the semantics of GetNextFeature() are slightly altered: a NULL return does not necessarily mean that all features from the current layer have been read; it could also mean that there is still a feature to read, but one that belongs to another layer. In that case, the file should be read with code similar to the following :
int nLayerCount = poDS->GetLayerCount();
int bFoundFeature;
do
{
    bFoundFeature = FALSE;
    for( int iLayer = 0; iLayer < nLayerCount; iLayer++ )
    {
        OGRLayer *poLayer = poDS->GetLayer(iLayer);
        OGRFeature *poFeature;
        while((poFeature = poLayer->GetNextFeature()) != NULL)
        {
            bFoundFeature = TRUE;
            poFeature->DumpReadable(stdout, NULL);
            OGRFeature::DestroyFeature(poFeature);
        }
    }
} while (bInterleaved && bFoundFeature);
The GML writer supports the following dataset creation options:
<?xml version="1.0" encoding="UTF-8"?>
<gml:FeatureCollection xmlns:gml="http://www.opengis.net/gml">
  <gml:featureMember>
    <LAYER>
      <attrib1>attrib1_value</attrib1>
      <attrib2container>
        <attrib2>attrib2_value</attrib2>
      </attrib2container>
      <location1container>
        <location1>
          <gml:Point><gml:coordinates>3,50</gml:coordinates></gml:Point>
        </location1>
      </location1container>
      <location2>
        <gml:Point><gml:coordinates>2,49</gml:coordinates></gml:Point>
      </location2>
    </LAYER>
  </gml:featureMember>
</gml:FeatureCollection>

and the following associated .gfs file:
<GMLFeatureClassList>
  <GMLFeatureClass>
    <Name>LAYER</Name>
    <ElementPath>LAYER</ElementPath>
    <GeometryElementPath>location1container|location1</GeometryElementPath>
    <PropertyDefn>
      <Name>attrib1</Name>
      <ElementPath>attrib1</ElementPath>
      <Type>String</Type>
      <Width>13</Width>
    </PropertyDefn>
    <PropertyDefn>
      <Name>attrib2</Name>
      <ElementPath>attrib2container|attrib2</ElementPath>
      <Type>String</Type>
      <Width>13</Width>
    </PropertyDefn>
  </GMLFeatureClass>
</GMLFeatureClassList>

Note the presence of the '|' character in the <ElementPath> and <GeometryElementPath> elements to specify the desired field/geometry element that is a nested XML element. Nested field elements are only supported from OGR 1.8.0, as is specifying <GeometryElementPath>. If GeometryElementPath is not specified, the GML driver will use the last recognized geometry element.
The output of ogrinfo test.gml -ro -al is:
Layer name: LAYER
Geometry: Unknown (any)
Feature Count: 1
Extent: (3.000000, 50.000000) - (3.000000, 50.000000)
Layer SRS WKT:
(unknown)
Geometry Column = location1container|location1
attrib1: String (13.0)
attrib2: String (13.0)
OGRFeature(LAYER):0
  attrib1 (String) = attrib1_value
  attrib2 (String) = attrib2_value
  POINT (3 50)
ogr2ogr -f GML output.gml OCI:usr/pwd@db my_feature -where "id = 0"
The ogr2ogr utility can be used to dump the results of a PostGIS query to GML:
ogr2ogr -f GML output.gml PG:'host=myserver dbname=warmerda' -sql "SELECT pop_1994 from canada where province_name = 'Alberta'"
For now, the GPSBabel driver relies on the GPSBabel utility to access various GPS file formats.
The GPSBabel executable must be accessible through the PATH.
The returned layers can be waypoints, routes, route_points, tracks, track_points depending on the input data.
The syntax to specify an input datasource is : GPSBabel:gpsbabel_file_format[,gpsbabel_format_option]*:[features=[waypoints,][tracks,][routes]:]filename where :
Alternatively, for a few selected GPSBabel formats, just specifying the filename might be sufficient. The list includes for now :
The USE_TEMPFILE=YES configuration option can be used to create an on-disk temporary GPX file instead of an in-memory one, when reading a big amount of data.
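The input datasource syntax given above can be illustrated with a small parser sketch (stdlib-only Python with a hypothetical helper name, not part of GDAL):

```python
def parse_gpsbabel_dsn(dsn):
    """Split 'GPSBabel:format[,option]*:[features=...:]filename' into its parts."""
    assert dsn.startswith("GPSBabel:")
    rest = dsn[len("GPSBabel:"):]
    fmt, rest = rest.split(":", 1)          # format and its comma-separated options
    features = None
    if rest.startswith("features="):        # optional features= filter
        features, rest = rest[len("features="):].split(":", 1)
        features = features.split(",")
    return {"format": fmt.split(","), "features": features, "filename": rest}

print(parse_gpsbabel_dsn("GPSBabel:garmin_txt:features=waypoints,tracks:out.txt"))
# {'format': ['garmin_txt'], 'features': ['waypoints', 'tracks'], 'filename': 'out.txt'}
```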
The supported geometries, options and other creation issues are those of the GPX driver. Please refer to its documentation for more details.
The syntax to specify an output datasource is : GPSBabel:gpsbabel_file_format[,gpsbabel_format_option]*:filename where :
Alternatively, you can just pass a filename as output datasource name and specify the dataset creation option GPSBABEL_DRIVER=gpsbabel_file_format[,gpsbabel_format_option]*
The USE_TEMPFILE=YES configuration option can be used to create an on-disk temporary GPX file instead of an in-memory one, when writing a big amount of data.
ogrinfo -ro -al GPSBabel:garmin:usb:
ogr2ogr -f GPSBabel GPSBabel:mapsend:out.mapsend in.shp
GPX (the GPS Exchange Format) is a light-weight XML data format for the interchange of GPS data (waypoints, routes, and tracks) between applications and Web services on the Internet.
OGR has support for GPX reading (if GDAL is built with Expat library support) and writing.
Versions supported are GPX 1.0 and 1.1 for reading, and GPX 1.1 for writing.
The OGR driver supports reading and writing of all the GPX feature types :
In addition to its GPX attributes, each route point of a route has a route_fid (foreign key to the FID of the route it belongs to) and a route_point_id, which is its sequence number in the route. The same applies for track points, with track_fid, track_seg_id and track_seg_point_id. All coordinates are relative to the WGS84 datum (EPSG:4326).
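The numbering scheme can be sketched like this (plain Python, illustrative; the helper name is hypothetical):

```python
def number_route_points(routes):
    """Assign route_fid / route_point_id pairs, as exposed on a route_points layer."""
    rows = []
    for route_fid, points in enumerate(routes):
        for route_point_id, (lat, lon) in enumerate(points):
            rows.append({"route_fid": route_fid,
                         "route_point_id": route_point_id,
                         "lat": lat, "lon": lon})
    return rows

# Two routes: the first with two points, the second with one.
routes = [[(48.0, 2.0), (48.1, 2.1)], [(45.0, 5.0)]]
for row in number_route_points(routes):
    print(row)
```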
If the environment variable GPX_ELE_AS_25D is set to YES, the elevation element will be used to set the Z coordinates of waypoints, route points and track points.
The OGR GPX driver reads and writes the GPX attributes for waypoints, routes and tracks.
By default, up to 2 <link> elements per feature can be taken into account. This default number can be changed with the GPX_N_MAX_LINKS environment variable.
The content returned by OGR will be encoded in UTF-8, after conversion from the encoding mentioned in the file header.
If your GPX file is not encoded in one of the previous encodings, it will not be parsed by the GPX driver. You may convert it into one of the supported encodings with the iconv utility, for example, and change the encoding parameter value in the XML header accordingly.
When writing a GPX file, the driver expects UTF-8 content to be passed in.
The following GPX content :
<extensions>
  <navaid:name>TOTAL RF</navaid:name>
  <navaid:address>BENSALEM</navaid:address>
  <navaid:state>PA</navaid:state>
  <navaid:country>US</navaid:country>
  <navaid:frequencies>
    <navaid:frequency type="CTAF" frequency="122.900" name="CTAF"/>
  </navaid:frequencies>
  <navaid:runways>
    <navaid:runway designation="H1" length="80" width="80" surface="ASPH-G">
    </navaid:runway>
  </navaid:runways>
  <navaid:magvar>12</navaid:magvar>
</extensions>

will be interpreted in the OGR SF model as :
navaid_name (String) = TOTAL RF
navaid_address (String) = BENSALEM
navaid_state (String) = PA
navaid_country (String) = US
navaid_frequencies (String) = <navaid:frequency type="CTAF" frequency="122.900" name="CTAF" ></navaid:frequency>
navaid_runways (String) = <navaid:runway designation="H1" length="80" width="80" surface="ASPH-G" ></navaid:runway>
navaid_magvar (Integer) = 12
Note : the GPX driver will output the content of the extensions element only if it is found in the first records of the GPX file. If extensions appear later, you can force an explicit parsing of the whole file with the GPX_USE_EXTENSIONS environment variable.
If the output file already exists, the writing will not occur. You have to delete the existing file first.
Supported geometries :
For route points and tracks points, if there is a Z coordinate, it is used to fill the elevation element of the corresponding points.
Starting with GDAL/OGR 1.8.0, if a layer is named "track_points" with wkbPoint/wkbPoint25D geometries, the tracks in the GPX file will be built from the sequence of features in that layer. This is the way to set GPX attributes for each track point, in addition to the raw coordinates. Points belonging to the same track are identified by the same value of the 'track_fid' field (and the track will be broken into track segments according to the value of the 'track_seg_id' field). They must be written in sequence so that track objects are properly reconstructed. The 'track_name' field can be set on the first track point to fill the <name> element of the track. Similarly, if a layer is named "route_points" with wkbPoint/wkbPoint25D geometries, the routes in the GPX file will be built from the sequence of points with the same value of the 'route_fid' field. The 'route_name' field can be set on the first route point to fill the <name> element of the route.
The GPX writer supports the following layer creation options:
The GPX writer supports the following dataset creation options:
Waypoints, routes and tracks must be written in that order to be valid against the XML Schema.
When translating from a source dataset, it may be necessary to rename the field names from the source dataset to the expected GPX attribute names, such as <name>, <desc>, etc. This can be done with an OGR VRT dataset, or by using the "-sql" option of the ogr2ogr utility.
To avoid this, starting with GDAL 1.6.1, you can set the GPX_SHORT_NAMES configuration option to TRUE to make them be reported as "trksegid" and "trksegptid" respectively, which will allow them to be unique once translated to DBF. The "route_point_id" field of the route_points layer will also be renamed to "rteptid". Note, however, that no particular processing will be done for any extension field names.
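The renames above, plus the 10-character limit that DBF imposes on field names, can be summarized as a small helper (illustrative only; the truncation here just models the DBF limit, it is not what the driver does for extension fields):

```python
# The documented GPX_SHORT_NAMES renames, applied before a DBF export.
SHORT_NAMES = {
    "track_seg_id": "trksegid",
    "track_seg_point_id": "trksegptid",
    "route_point_id": "rteptid",
}

def dbf_safe(name):
    """Map a GPX layer field name to a DBF-compatible (<= 10 chars) name."""
    return SHORT_NAMES.get(name, name)[:10]
```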
To translate the track_points layer of a GPX file to a set of shapefiles :
ogr2ogr --config GPX_SHORT_NAMES YES out input.gpx track_points
Starting with GDAL 1.6.0, you can use the OGR SQL CAST operator to convert the time field to a string:
ogr2ogr out input.gpx -sql "SELECT ele, CAST(time AS character(32)) FROM waypoints"

Starting with GDAL 1.7.0, there is a more convenient way to select all fields and ask for the conversion of the ones of a given type to strings:
ogr2ogr out input.gpx -fieldTypeToString DateTime
ogrinfo -ro -al input.gpx
ogr2ogr -f GPX output.gpx input.gpx waypoints routes tracks
ogr2ogr -f GPX -dsco GPX_USE_EXTENSIONS=YES output.gpx input

which will give an output like the following one:
<?xml version="1.0"?>
<gpx version="1.1" creator="GDAL 1.5dev"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xmlns:ogr="http://osgeo.org/gdal"
 xmlns="http://www.topografix.com/GPX/1/1"
 xsi:schemaLocation="http://www.topografix.com/GPX/1/1 http://www.topografix.com/GPX/1/1/gpx.xsd">
<wpt lat="1" lon="2">
  <extensions>
    <ogr:Primary_ID>PID5</ogr:Primary_ID>
    <ogr:Secondary_ID>SID5</ogr:Secondary_ID>
  </extensions>
</wpt>
<wpt lat="3" lon="4">
  <extensions>
    <ogr:Primary_ID>PID4</ogr:Primary_ID>
    <ogr:Secondary_ID>SID4</ogr:Secondary_ID>
  </extensions>
</wpt>
</gpx>
ogr2ogr -f GPX output.gpx input.shp -sql "SELECT field1 AS name, field2 AS desc FROM input"
This error happens when the layer to create does not expose a precise geometry type, but just a generic wkbUnknown type. This is for example the case when using ogr2ogr with a SQL statement against a PostgreSQL datasource. You must then explicitly specify -nlt POINT (or LINESTRING or MULTILINESTRING).
GRASS points are represented as wkbPoint, lines and boundaries as wkbLineString and areas as wkbPolygon. wkbMulti* and wkbGeometryCollection are not used. Multiple feature types can be mixed in one layer. If a layer contains only features of one type, it is set appropriately and can be retrieved by OGRLayer::GetLayerDefn().
If a geometry has multiple categories of the same layer attached, it is represented as multiple features (one for each category).
Both 2D and 3D maps are supported.
$GISDBASE/$LOCATION_NAME/$MAPSET/vector/mymap/head

where 'mymap' is the name of a vector map. For example:
/home/cimrman/grass_data/jizerky/jara/vector/liptakov/head
1 rivers cat $GISDBASE/$LOCATION_NAME/$MAPSET/dbf/ dbf

it is possible to assign the name 'rivers'
1/rivers rivers cat $GISDBASE/$LOCATION_NAME/$MAPSET/dbf/ dbf

the layer 1 will be listed as layer 'rivers'.
If a layer has no attributes linked and it has only categories, the OGR internal SQL engine is used to evaluate the expression. A category is an integer number attached to a geometry; it is a sort of ID, but it is not a FID, as multiple features in one layer can have the same category.
Evaluation is done once when the attribute filter is set.
Evaluation is done once when the spatial filter is set.
GPSTrackMaker is a program that is compatible with more than 160 GPS models. It allows you to create your own maps. It supports vector maps and images.
The OGR driver has support for reading and writing GTM 211 files (.gtm); however, this implementation does not support images and routes. Waypoints and tracks are supported.
Although GTM has support for many datums, like NAD 1967, SAD 1969, and others, the output file of the OGR driver will use WGS 1984. The driver will only properly read GTM files georeferenced as WGS 1984; if that is not the case, a warning will be issued.
The OGR driver supports just POINT, LINESTRING, and MULTILINESTRING.
ogrinfo -ro -al input.gtm
ogr2ogr -f "GPSTrackMaker" output.gtm input.shp -sql "SELECT field1 AS name, field2 AS color, field3 AS type FROM input"
ogr2ogr -f "GPSTrackMaker" output.gtm PG:"host=hostaddress user=username dbname=db password=mypassword" -sql "select field1 as name, field2 as color, field3 as type, wkb_geometry from input" -nlt MULTILINESTRING
This driver reads files containing sounding data following the Hydrographic Transfer Format (HTF), which is used by the Australian Hydrographic Office (AHO).
The driver has been developed based on the HTF 2.02 specification.
The file must be georeferenced in UTM WGS84 to be considered valid by the driver.
The driver returns two spatial layers: a layer named "polygon" and a layer named "sounding". There is also a "hidden" layer, called "metadata", that can be fetched with GetLayerByName(), and which contains a single feature made of the header lines of the file.
Polygons are used to distinguish between differing survey categories, such that any significant change in position/depth accuracy and/or a change in the seafloor coverage will dictate a separate bounding polygon.
The "polygon" layer contains the following fields:
This driver implements support for access to spatial tables in IBM Informix extended with the DataBlade spatial module.
When opening a database, its name should be specified in the form
"IDB:dbname={dbname} server={host} user={login} pass={pass} table={layertable}".
The IDB: prefix is used to mark the name as an IDB connection string.
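The connection string can be assembled programmatically; a minimal sketch, using the key names documented above (the helper itself is hypothetical):

```python
# Build the documented "IDB:..." connection string from its parts.
def idb_dsn(dbname, server, user, password, table):
    return ("IDB:dbname={} server={} user={} pass={} table={}"
            .format(dbname, server, user, password, table))

dsn = idb_dsn("frames", "demo_on", "informix", "secret", "roads")
```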
If the geometry_columns table exists, then all listed tables and named views will be treated as OGR layers. Otherwise all regular user tables will be treated as layers.
Regular (non-spatial) tables can be accessed, and will return features with attributes, but not geometry. If the table has a "st_*" field, it will be treated as a spatial table. The type of the field is inspected to determine how to read it.
Driver supports automatic FID detection.
For more information about Informix variables, read the documentation of the Informix Client SDK.
This example shows using ogrinfo to list Informix DataBlade layers on a different host.
ogrinfo -ro IDB:'server=demo_on user=informix dbname=frames'
Some possible transformations using ogr2ogr.
ogr2ogr -f "ESRI Shapefile" shpdir ili-bsp.itf,ili-bsp.ili
ogr2ogr -f "ESRI Shapefile" shpdir RoadsExdm2ien.xml,RoadsExdm2ben.ili,RoadsExdm2ien.ili

or without model:
ogr2ogr -f "ESRI Shapefile" shpdir RoadsExdm2ien.xml
ogr2ogr -f "Interlis 1" ili-bsp.itf Bodenbedeckung__BoFlaechen_Form.shp
ogr2ogr -f "Interlis 2" LandCover.xml,RoadsExdm2ben.ili RoadsExdm2ben_10.Roads.LandCover.shp
ogr2ogr -f PostgreSQL PG:dbname=warmerda av_fixpunkte_ohne_LFPNachfuehrung.itf,av.ili -lco OVERWRITE=yes
ogr2ogr -f PostgreSQL PG:dbname=warmerda av_fixpunkte_mit_LFPNachfuehrung.itf,av.ili -append
When opening a database, its name should be specified in the form "@driver=ingres,dbname=dbname[,options]" where the options can include comma-separated items like "username=*userid*", "password=*password*", "timeout=*timeout*", "tables=table1/table2". The driver and dbname values are required, while the rest are optional. If username and password are not provided, an attempt is made to authenticate as the current user.
Examples:
@driver=ingres,dbname=test,userid=warmerda,password=test,tables=usa/canada
@driver=ingres,dbname=mapping

If the tables list is not provided, an attempt is made to enumerate all non-system tables as layers; otherwise only the listed tables are represented as layers. This option is primarily useful when a database has a lot of tables, and scanning all their schemas would take a significant amount of time.
If an integer field exists in a table that is named "ogr_fid" it will be used as the FID, otherwise FIDs will be assigned sequentially. This can result in different FIDs being assigned to a given record/feature depending on the spatial and attribute query filters in effect at a given time.
By default, SQL statements are passed directly to the INGRES database engine. It's also possible to request the driver to handle SQL commands with the OGR SQL engine, by passing "OGRSQL" to the ExecuteSQL() method as the name of the SQL dialect.
Currently the INGRES driver supports only "old style" INGRES spatial data types such as POLYGON, POINT, LINE, LONG POLYGON, LONG LINE, etc. It is anticipated that in the future a newer OGC-compliant set of spatial types will also be supported.

Keyhole Markup Language (KML) is an XML-based language for managing the display of 3D geospatial data. KML has been accepted as an OGC standard, and is supported in one way or another by the major GeoBrowsers. Note that KML by specification uses only a single projection, EPSG:4326. All OGR KML output will be presented in EPSG:4326; as such, OGR will create layers in the correct coordinate system and transform any geometries.
At this time, only vector layers are handled by the KML driver (there are additional scripts supplied with the GDAL project that can build other kinds of output).
KML reading is only available if GDAL/OGR is built with the Expat XML Parser, otherwise only KML writing will be supported.
Supported geometry types are Point, LineString, Polygon, MultiPoint, MultiLineString, MultiPolygon and MultiGeometry. There are limitations; for example, the nested nature of folders in a source KML file is lost: folder <description> tags will not carry through to output. Since GDAL 1.6.1, folders containing multiple geometry types, like POINT and POLYGON, are supported.
Since not all features of KML are able to be represented in the Simple Features geometry model, you will not be able to generate many KML-specific attributes from within GDAL/OGR. Please try a few test files to get a sense of what is possible.
When outputting KML, the OGR KML driver will translate each OGR Layer into a KML Folder (you may encounter unexpected behavior if you try to mix the geometry types of elements in a layer, e.g. LINESTRING and POINT data).

The KML Driver will rename some layers, or source KML folder names, into new names it considers valid; for example 'Layer #0', the default name of the first unnamed Layer, becomes 'Layer__0'.
KML is a mix of formatting and feature data. The <description> tag of a Placemark will be displayed in most geobrowsers as an HTML-filled balloon. When writing KML, Layer element attributes are added as simple schema fields. This best preserves feature type information.
Limited support is available for fills, line color and other styling attributes. Please try a few sample files to get a better sense of actual behavior.
The content returned by OGR will be encoded in UTF-8, after conversion from the encoding mentioned in the file header.
If your KML file is not encoded in one of the previous encodings, it will not be parsed by the KML driver. You may convert it into one of the supported encodings with the iconv utility, for example, and change the encoding parameter value in the XML header accordingly.
When writing a KML file, the driver expects UTF-8 content to be passed in.
The following creation options are supported:
ogr2ogr -f KML output.kml input.shp -dsco NameField=RegionName
ogr2ogr -f KML output.kml input.shp -dsco AltitudeMode=absolute
ogr2ogr -f KML output.kml PG:'host=myserver dbname=warmerda' -sql "SELECT pop_1994 from canada where province_name = 'Alberta'"
How to dump contents of .kml file as OGR sees it:
ogrinfo -ro somedisplay.kml
Google Earth seems to have some limits regarding the number of coordinates in complex geometries like polygons. When the problem appears, the problematic geometries are displayed completely or partially covered by vertical stripes. Unfortunately, no exact number is given in the KML specification about this limitation, so the KML driver will not warn about potential problems. One possible and tested solution is to simplify a line or a polygon to remove some coordinates. Here is the whole discussion about this issue on the Google KML Developer Forum, in the "polygon displays with vertical stripes" thread.
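A crude form of the simplification mentioned above is plain vertex decimation: keep every n-th vertex but always retain the final vertex, so lines stay connected and closed rings stay closed. This is a generic illustration, not part of the KML driver (real workflows would more likely use a tolerance-based simplifier):

```python
# Thin a coordinate sequence before writing KML by keeping every n-th
# vertex plus the last one.
def decimate(coords, n):
    if len(coords) <= 2 or n <= 1:
        return list(coords)
    kept = list(coords[:-1:n])   # every n-th vertex, excluding the last
    if kept[-1] != coords[-1]:
        kept.append(coords[-1])  # always keep the endpoint
    return kept
```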
The LIBKML driver is a client of Libkml from Google, a reference implementation of KML reading and writing, in the form of a cross platform C++ library. You must build and install Libkml in order to use this OGR driver.
Note that if you build and include this LIBKML driver, it will become the default reader of KML for ogr, overriding the previous KML driver. You can still specify either KML or LIBKML as the output driver via the command line.
Libkml from Google provides reading services for any valid KML file. However, please be advised that some KML facilities do not map into the Simple Features specification ogr uses as its internal structure. Therefore, a best effort will be made by the driver to understand the content of a KML file read by libkml into ogr, but your mileage may vary. Please try a few KML files as samples to get a sense of what is understood. In particular, nesting of feature sets more than one deep will be flattened to support ogr's internal format.
You may specify a datasource as a kml file somefile.kml, a directory somedir/, or a kmz file somefile.kmz.
By default on directory and kmz datasources, an index file of all the layers will be read from or written to doc.kml. It contains a <NetworkLink> to each layer file in the datasource. This feature can be turned off by setting the environment variable LIBKML_USE_DOC.KML to "no".
Datasource style tables are written to the <Document> in a .kml, style/style.kml in a kmz file, or style.kml in a directory, as one or more <Style> elements. Not all of OGR Feature Style can translate into KML.
Layers are mapped to kml files as a <Document> or <Folder>, and in kmz files or directories as a separate kml file.
Layer style tables cannot be read from or written to a kml layer that is a <Folder>; otherwise they are in the <Document> that is the layer.
Read and write of <Schema> is supported for .kml files, .kmz files, and directories.
An OGR feature translates to kml as a <Placemark>.
Style strings at the feature level are mapped to KML as either a <Style> or <styleUrl> in each <Placemark>.
When reading a kml feature, if the environment variable LIBKML_RESOLVE_STYLE is set to yes, styleurls are looked up in the style tables and the feature's style string is set to the style from the table. This allows reading of shared styles by applications, like MapServer, that do not read style tables.
When reading a kml feature, if the environment variable LIBKML_EXTERNAL_STYLE is set to yes, a styleurl that is external to the datasource is read from disk or fetched from the server and parsed into the datasource style table. If the style kml cannot be read or LIBKML_EXTERNAL_STYLE is set to no, then the styleurl is copied to the style string.
OGR fields (feature attributes) are mapped to kml with <Schema> and <SimpleData>, except for some special fields as noted below.
A rich set of environment variables is available to define how fields map, in input and output, to a KML <Placemark>. For example, if you want a field called 'Cities' to map to the <name> tag in KML, you can set an environment variable.
Translation of OGR Geometry to KML Geometry is fairly straightforward, with only a couple of exceptions: Point to <Point>, LineString to <LineString>, LinearRing to <LinearRing>, and Polygon to <Polygon>. In OGR a polygon contains an array of LinearRings, the first one being the outer ring; KML has the tags <outerBoundaryIs> and <innerBoundaryIs> to differentiate between the two. OGR has several Multi types of geometry: GeometryCollection, MultiPolygon, MultiPoint, and MultiLineString. When possible, OGR will try to map <MultiGeometry> to the more precise OGR geometry type (MultiPoint, MultiLineString or MultiPolygon), and default to GeometryCollection in case of mixed content.
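The mapping described above can be written down as a small lookup table plus a rule for polygon rings. The code form is illustrative, not driver code; only the element names come from the text:

```python
# OGR geometry type -> KML element, per the mapping described above.
OGR_TO_KML = {
    "Point": "Point",
    "LineString": "LineString",
    "LinearRing": "LinearRing",
    "Polygon": "Polygon",
    "MultiPoint": "MultiGeometry",
    "MultiLineString": "MultiGeometry",
    "MultiPolygon": "MultiGeometry",
    "GeometryCollection": "MultiGeometry",
}

def ring_tag(ring_index):
    # In a KML <Polygon>, ring 0 is the outer boundary; the rest are holes.
    return "outerBoundaryIs" if ring_index == 0 else "innerBoundaryIs"
```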
Sometimes kml geometry will span the dateline; in applications like QGIS or MapServer this will create horizontal lines all the way around the globe. Setting the environment variable LIBKML_WRAPDATELINE to "yes" will cause the libkml driver to split the geometry at the dateline when read.
The following bash script will build a csv file and a vrt file, and then translate them to KML using ogr2ogr into a .kml file with timestamps and styling.
#!/bin/bash
# Copyright (c) 2010, Brian Case
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.

icon="http://maps.google.com/mapfiles/kml/shapes/shaded_dot.png"
rgba33="#FF9900"
rgba70="#FFFF00"
rgba150="#00FF00"
rgba300="#0000FF"
rgba500="#9900FF"
rgba800="#FF0000"

function docsv {
    IFS=','
    while read Date Time Lat Lon Mag Dep
    do
        ts=$(echo $Date | sed 's:/:-:g')T${Time%%.*}Z
        rgba=""

        if [[ $rgba == "" ]] && [[ $Dep -lt 33 ]]
        then
            rgba=$rgba33
        fi

        if [[ $rgba == "" ]] && [[ $Dep -lt 70 ]]
        then
            rgba=$rgba70
        fi

        if [[ $rgba == "" ]] && [[ $Dep -lt 150 ]]
        then
            rgba=$rgba150
        fi

        if [[ $rgba == "" ]] && [[ $Dep -lt 300 ]]
        then
            rgba=$rgba300
        fi

        if [[ $rgba == "" ]] && [[ $Dep -lt 500 ]]
        then
            rgba=$rgba500
        fi

        if [[ $rgba == "" ]]
        then
            rgba=$rgba800
        fi

        style="\"SYMBOL(s:$Mag,id:\"\"$icon\"\",c:$rgba)\""

        echo $Date,$Time,$Lat,$Lon,$Mag,$Dep,$ts,"$style"
    done
}

wget http://neic.usgs.gov/neis/gis/qed.asc -O /dev/stdout |\
    tail -n +2 > qed.asc

echo Date,TimeUTC,Latitude,Longitude,Magnitude,Depth,timestamp,OGR_STYLE > qed.csv

docsv < qed.asc >> qed.csv

cat > qed.vrt << EOF
<OGRVRTDataSource>
    <OGRVRTLayer name="qed">
        <SrcDataSource>qed.csv</SrcDataSource>
        <GeometryType>wkbPoint</GeometryType>
        <LayerSRS>WGS84</LayerSRS>
        <GeometryField encoding="PointFromColumns" x="Longitude" y="Latitude"/>
    </OGRVRTLayer>
</OGRVRTDataSource>
EOF

ogr2ogr -f libkml qed.kml qed.vrt
GDAL/OGR >= 1.9.0
OGR optionally supports reading Access .mdb files by using the Java Jackcess library.
This driver is primarily meant to be used on Unix platforms to overcome the issues often met with the MDBTools library, which acts as the ODBC driver for MDB databases.
The driver can detect ESRI Personal Geodatabases and Geomedia MDB databases, and will deal with them exactly as the PGeo and Geomedia drivers do. For other MDB databases, all the tables will be presented as OGR layers.
./configure --with-java=yes --with-mdb=yes

It is possible to add the --with-jvm-lib-add-rpath option to embed the path to the libjvm.so in the GDAL library.
On other Linux flavours, you may need to specify:
./configure --with-java=/path/to/jdk/root/path \
            --with-jvm-lib=/path/to/libjvm/directory \
            --with-mdb=yes

where /path/to/libjvm/directory is for example /usr/lib/jvm/java-6-openjdk/jre/lib/amd64
There is no way to open an existing Memory datastore. It must be created with CreateDataSource() and populated and used from that handle. When the datastore is closed all contents are freed and destroyed.
The driver does not implement spatial or attribute indexing, so spatial and attribute queries are still evaluated against all features. Fetching features by feature id should be very fast (just an array lookup and feature copy).
Feature ids passed to CreateFeature() are preserved unless they exceed 10000000, in which case they will be reset to avoid a requirement for an excessively large and sparse feature array.
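The FID rule above can be sketched as follows. The threshold comes from the text; the reset-to-sequential behavior is modeled here as an assumption, not the driver's exact code:

```python
# Sketch of the Memory driver FID rule: requested ids are kept up to the
# documented threshold; above it, a sequential id is assigned instead.
MAX_PRESERVED_FID = 10000000

def assign_fid(requested, next_seq):
    """Return (assigned_fid, updated_next_seq)."""
    if requested is not None and requested <= MAX_PRESERVED_FID:
        return requested, next_seq
    return next_seq, next_seq + 1
```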
New fields can be added to a layer that already has features.
MapInfo datasets in native (TAB) format and in interchange (MIF/MID) format are supported for reading and writing. Update of existing files is not currently supported.
Note: In the rest of this document "MIF/MID File" is used to refer to a pair of .MIF + .MID files, and "TAB file" refers to the set of files for a MapInfo table in binary form (usually with extensions .TAB, .DAT, .MAP, .ID, .IND).
The MapInfo driver treats a whole directory of files as a dataset, and a single file within that directory as a layer. In this case the directory name should be used as the dataset name.
However, it is also possible to use one of the files (.tab or .mif) in a MapInfo set as the dataset name, and then it will be treated as a dataset with one single layer.
MapInfo coordinate system information is supported for reading and writing.
The TAB File format requires that the bounds (geographical extents) of a new file be set before writing the first feature. However, there is currently no clean mechanism to set the default bounds of a new file through the OGRDataSource interface.
We should fix the driver at some point to set valid default bounds for each projection, but for the time being, the MapInfo driver sets the following default bounds when a new layer is created:
If no coordinate system is provided when creating a layer, the projection case is used, not geographic, which can result in very low precision if the coordinates really are geographic. You can add "-a_srs WGS84" to the ogr2ogr commandline during a translation to force geographic mode.
MapInfo feature attributes suffer a number of limitations:
Before v1.8.0, the driver was incorrectly using "." as the delimiter for id: parameters; starting with v1.8.0 the driver uses a comma as the delimiter, as per the OGR Feature Style Specification.
This driver implements support for access to spatial tables in Microsoft SQL Server 2008+, which contains the geometry and geography data types to represent the geometry columns.
MSSQL:server=.\MSSQLSERVER2008;database=dbname;trusted_connection=yes

In addition to the standard parameters of the ODBC driver connection string format, the following custom parameters can also be used in the following syntax:
The parameter names are not case sensitive in the connection strings.
Specifying the Database parameter is required by the driver in order to select the proper database.
The MS SQL Spatial driver passes SQL statements directly to MS SQL by default, rather than evaluating them internally when using the ExecuteSQL() call on the OGRDataSource, or the -sql command option to ogr2ogr. Attribute query expressions are also passed directly through to MSSQL. It's also possible to request the OGR MSSQL driver to handle SQL commands with the OGR SQL engine, by passing "OGRSQL" string to the ExecuteSQL() method, as the name of the SQL dialect.
The MSSQL driver in OGR supports the OGRLayer::StartTransaction(), OGRLayer::CommitTransaction() and OGRLayer::RollbackTransaction() calls in the normal SQL sense.
This driver doesn't support creating new databases (you might want to use the Microsoft SQL Server Client Tools for this purpose), but it does allow creation of new layers within an existing database.
By default the MS SQL Spatial driver doesn't add spatial indexes to the tables during layer creation. However, you can create a spatial index by using the following SQL command:
create spatial index on schema.table
The spatial index can also be dropped by using the following syntax:
drop spatial index on schema.table
Creating a layer from an OGR data source
ogr2ogr -overwrite -f MSSQLSpatial "MSSQL:server=.\MSSQLSERVER2008;database=geodb; trusted_connection=yes" "rivers.tab"
Connecting to a layer and dumping the contents
ogrinfo -al "MSSQL:server=.\MSSQLSERVER2008;database=geodb;tables=rivers; trusted_connection=yes"
Creating a spatial index
ogrinfo -sql "create spatial index on rivers" "MSSQL:server=.\MSSQLSERVER2008; database=geodb;trusted_connection=yes"
When opening a database, its name should be specified in the form "MYSQL:dbname[,options]" where the options can include comma-separated items like "user=*userid*", "password=*password*", "host=*host*" and "port=*port*".
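The datasource name format above can be parsed with a few lines of Python; an illustrative helper, not part of OGR:

```python
# Split the documented "MYSQL:dbname[,options]" form into a dict.
def parse_mysql_dsn(dsn):
    prefix = "MYSQL:"
    assert dsn.startswith(prefix), "not a MYSQL: datasource name"
    parts = dsn[len(prefix):].split(",")
    opts = {"dbname": parts[0]}       # first item is always the database
    for item in parts[1:]:            # the rest are key=value options
        key, _, value = item.partition("=")
        opts[key] = value
    return opts
```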
As well, a "tables=*table*;*table*..." option can be added to restrict access to a specific list of tables in the database. This option is primarily useful when a database has a lot of tables, and scanning all their schemas would take a significant amount of time.
Currently all regular user tables are assumed to be layers from an OGR point of view, with the table names as the layer names. Named views are not currently supported.
If a single integer field is a primary key, it will be used as the FID; otherwise the FID will be assigned sequentially, and fetches by FID will be extremely slow.
By default, SQL statements are passed directly to the MySQL database engine. It's also possible to request the driver to handle SQL commands with the OGR SQL engine, by passing "OGRSQL" to the ExecuteSQL() method as the name of the SQL dialect.
By default, the MySQL driver will attempt to preserve the precision of OGR features when creating and reading MySQL layers. For integer fields with a specified width, it will use DECIMAL as the MySQL field type with a specified precision of 0. For real fields, it will use DOUBLE with the specified width and precision. For string fields with a specified width, VARCHAR will be used.
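The width/precision mapping described above can be summarized as a small helper. The rules for fields with a specified width come from the text; the fallbacks for width 0 (INT, DOUBLE, TEXT) are assumptions for illustration only:

```python
# Map an OGR field description to a MySQL column type, per the rules above.
def mysql_type(ogr_type, width=0, precision=0):
    if ogr_type == "Integer":
        # integer with a width -> DECIMAL with precision 0 (assumed INT otherwise)
        return "DECIMAL(%d,0)" % width if width else "INT"
    if ogr_type == "Real":
        return "DOUBLE(%d,%d)" % (width, precision) if width else "DOUBLE"
    if ogr_type == "String":
        return "VARCHAR(%d)" % width if width else "TEXT"
    raise ValueError("unhandled OGR type: %s" % ogr_type)
```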
The MySQL driver makes no allowances for character encodings at this time.
The MySQL driver is not transactional at this time.
MYSQL:westholland,user=root,password=psv9570,port=3306,tables=bedrijven

The following example uses ogr2ogr to copy the world_borders layer from a shapefile into a MySQL table. It overwrites a table with the existing name borders2, and sets a layer creation option to specify the geometry column name as SHAPE2.
ogr2ogr -f MySQL MySQL:test,user=root world_borders.shp -nln borders2 -update -overwrite -lco GEOMETRY_NAME=SHAPE2

The following example uses ogrinfo to return some summary information about the borders2 layer in the test database.
ogrinfo MySQL:test,user=root borders2 -so

Layer name: borders2
Geometry: Polygon
Feature Count: 3784
Extent: (-180.000000, -90.000000) - (180.000000, 83.623596)
Layer SRS WKT:
GEOGCS["GCS_WGS_1984",
    DATUM["WGS_1984",
        SPHEROID["WGS_84",6378137,298.257223563]],
    PRIMEM["Greenwich",0],
    UNIT["Degree",0.017453292519943295]]
FID Column = OGR_FID
Geometry Column = SHAPE2
cat: Real (0.0)
fips_cntry: String (80.0)
This driver depends on GDAL/OGR being built with the Xerces XML parsing library.
This driver was implemented within the context of the PostNAS project, which has more information on its use.
This driver treats a directory as a dataset and attempts to merge all the .NTF files in the directory, producing a layer for each type of feature (but generally not for each source file). Thus a directory containing several Landline files will have three layers (LANDLINE_POINT, LANDLINE_LINE and LANDLINE_NAME) regardless of the number of landline files.
NTF features are always returned with the British National Grid coordinate system. This may be inappropriate for NTF files written by organizations other than the UK Ordnance Survey.
Landline (and Landline Plus): LANDLINE_POINT, LANDLINE_LINE, LANDLINE_NAME
Panorama Contours: PANORAMA_POINT, PANORAMA_CONTOUR (the HEIGHT attribute holds the elevation)
Strategi: STRATEGI_POINT, STRATEGI_LINE, STRATEGI_TEXT, STRATEGI_NODE
Meridian: MERIDIAN_POINT, MERIDIAN_LINE, MERIDIAN_TEXT, MERIDIAN_NODE
Boundaryline: BOUNDARYLINE_LINK, BOUNDARYLINE_POLY, BOUNDARYLINE_COLLECTIONS (the _POLY layer has links to links allowing true polygons to be formed; otherwise the _POLYs only have a seed point for geometry. The collections are collections of polygons, also without geometry as read. This is the only product from which polygons can be constructed.)
BaseData.GB: BASEDATA_POINT, BASEDATA_LINE, BASEDATA_TEXT, BASEDATA_NODE
OSCAR Asset/Traffic: OSCAR_POINT, OSCAR_LINE, OSCAR_NODE
OSCAR Network: OSCAR_NETWORK_POINT, OSCAR_NETWORK_LINE, OSCAR_NETWORK_NODE
Address Point: ADDRESS_POINT
Code Point: CODE_POINT
Code Point Plus: CODE_POINT_PLUS

The dataset as a whole will also have a FEATURE_CLASSES layer containing a pure table relating FEAT_CODE numbers with feature class names (FC_NAME). This applies to all products in the dataset. A few layer types (such as the Code Point and Address Point products) don't include feature classes. Some products use feature classes that are not defined in the file, and so they will not appear in the FEATURE_CLASSES layer.
The layers are based on a low level feature type in the NTF file, but will generally contain features of many different feature codes (FEAT_CODE attribute). Different features within a given layer may have a variety of attributes in the file; however, the schema is established based on the union of all attributes possible within features of a particular type (e.g. POINT) of that product family (e.g. OSCAR Network).
If an NTF product is read that doesn't match one of the known schemas, it will go through a different generic handler which has only layers of type GENERIC_POINT and GENERIC_LINE. The features only have a FEAT_CODE attribute.
Details of what layers of what products have what attributes can be found in the NTFFileReader::EstablishLayers() method at the end of ntf_estlayers.cpp. This file also contains all the product specific translation code.
FEAT_CODE: General feature code integer, which can be used to look up a name in the FEATURE_CLASSES layer/table.
TEXT_ID/POINT_ID/LINE_ID/NAME_ID/COLL_ID/POLY_ID/GEOM_ID: Unique identifier for a feature of the appropriate type.
TILE_REF: All layers (except FEATURE_CLASSES) contain a TILE_REF attribute which indicates which tile (file) the features came from. Generally speaking, the id numbers are only unique within the tile, so TILE_REF can be used to restrict id links to features from the same file.
FONT/TEXT_HT/DIG_POSTN/ORIENT: Detailed information on the font, text height, digitizing position, and orientation of text or name objects. Review the OS product manuals to understand the units and meaning of these codes.
GEOM_ID_OF_POINT: For _NODE features this defines the POINT_ID of the point layer object to which this node corresponds. Generally speaking the nodes don't carry a geometry of their own; the node must be related to a point to establish its position.
GEOM_ID_OF_LINK: A list of _LINK or _LINE features that end/start at a node. Nodes, and this field, are generally only of value when establishing connectivity of line features for network analysis. Note that this should be related to the target feature's GEOM_ID, not its LINE_ID. On the BOUNDARYLINE_POLY layer this attribute contains the GEOM_IDs of the lines which form the edge of the polygon.
POLY_ID: A list of POLY_IDs from the BOUNDARYLINE_POLY layer associated with a given collection in the BOUNDARYLINE_COLLECTIONS layer.
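The connectivity use of GEOM_ID_OF_LINK can be sketched as a graph build: each node lists the GEOM_IDs of the links meeting it, so two links are connected when they share a node. The field names come from the text above; the graph code is a generic illustration, not driver code:

```python
# Build link-to-link adjacency from per-node GEOM_ID_OF_LINK lists,
# the kind of structure a network analysis would start from.
def link_graph(nodes):
    """nodes: iterable of GEOM_ID lists, one list per _NODE feature."""
    adj = {}
    for links in nodes:
        for a in links:
            for b in links:
                if a != b:
                    adj.setdefault(a, set()).add(b)
    return adj
```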
GENERIC_POINT
GENERIC_LINE
GENERIC_NAME
GENERIC_TEXT
GENERIC_POLY
GENERIC_NODE
GENERIC_COLLECTION

Generic products are primarily handled by the ntf_generic.cpp module, whereas specific products are handled in ntf_estlayers.cpp.
Because some data products not from the Ordnance Survey (OSNI datasets) were found to have record groups in unusual orders compared to what the UK Ordnance Survey does, it was found necessary to cache all the records of level 3 and higher generic products, and construct record groups by id reference from within this cache, rather than depending on convenient record orderings. This is accomplished by the NTFFileReader "indexing" capability near the bottom of ntffilereader.cpp. Because of this in-memory indexing, accessing generic datasets can be much more memory intensive than accessing known data products, though it isn't necessary for generic level 1 and 2 products.
It is possible to force a known product to be treated as generic by setting the FORCE_GENERIC option to "ON" using OGRNTFDataSource::SetOptionsList() as is demonstrated in ntfdump.cpp. This may also be accomplished from outside OGR applications by setting the OGR_NTF_OPTIONS environment variable to "FORCE_GENERIC=ON".
When opening a database, its name should be specified in the form "OCI:userid/password@database_instance:table,table". The list of tables is optional. The database_instance portion may be omitted when accessing the default local database instance.
If the list of tables is not provided, then all tables appearing in ALL_SDO_GEOM_METADATA will be treated by OGR as layers, with the table names as the layer names. Non-spatial tables, or spatial tables not listed in the ALL_SDO_GEOM_METADATA table, are not accessible unless explicitly listed in the datasource name. Even in databases where all desired layers are in the ALL_SDO_GEOM_METADATA table, it may be desirable to list only the tables to be used, as this can substantially reduce initialization time in databases with many tables.
If the table has an integer column called OGR_FID it will be used as the feature id by OGR (and it will not appear as a regular attribute). When loading data into Oracle Spatial OGR will always create the OGR_FID field.
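The documented datasource-name layout can be sketched as a small helper. This is an illustration of the string format only, not driver code; the credentials and table names are hypothetical.

```python
def oci_datasource_name(userid, password, instance=None, tables=None):
    """Build an OCI datasource name of the documented form
    OCI:userid/password@database_instance:table,table
    The instance and the table list are both optional."""
    name = "OCI:%s/%s" % (userid, password)
    if instance:
        name += "@" + instance
    if tables:
        name += ":" + ",".join(tables)
    return name

# Hypothetical credentials and tables, for illustration only.
print(oci_datasource_name("scott", "tiger", "orcl", ["CITIES", "ROADS"]))
# OCI:scott/tiger@orcl:CITIES,ROADS
```

Listing only the needed tables, as noted above, also avoids scanning ALL_SDO_GEOM_METADATA for every table at open time.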
By default, the Oracle driver passes SQL statements directly to Oracle rather than evaluating them internally when using the ExecuteSQL() call on the OGRDataSource, or the -sql command option to ogr2ogr. Attribute query expressions are also passed through to Oracle.
In addition, two special commands are supported via the ExecuteSQL() interface: "DELLAYER:<table_name>" to delete a layer, and "VALLAYER:<table_name>" to apply the SDO_GEOM.VALIDATE_GEOMETRY() check to a layer. Internally these pseudo-commands are translated into more complex SQL commands for Oracle.
It is also possible to request the driver to handle SQL commands with the OGR SQL engine, by passing the string "OGRSQL" to the ExecuteSQL() method as the name of the SQL dialect.
Upon closing the OGRDataSource, newly created layers will have a spatial index automatically built. At this point the USER_SDO_GEOM_METADATA table will also be updated with bounds for the table based on the features that have actually been written. One consequence of this is that once a layer has been loaded, it is generally not possible to load additional features outside the original extents without manually modifying the DIMINFO information in USER_SDO_GEOM_METADATA and rebuilding the spatial index.
% ogr2ogr -f OCI OCI:warmerda/password@gdal800.dreadfest.com abc.shp

This second example loads a political boundaries layer from VPF (via the OGDI driver), and renames the layer from the cryptic OGDI layer name to something more sensible. If an existing table of the desired name exists it is overwritten.
% ogr2ogr -f OCI OCI:warmerda/password \
    gltp:/vrf/usr4/mpp1/v0eur/vmaplv0/eurnasia \
    -lco OVERWRITE=yes -nln polbndl_bnd 'polbndl@bnd(*)_line'

This example shows using ogrinfo to evaluate an SQL query statement within Oracle. More sophisticated Oracle Spatial specific queries may also be used via the -sql commandline switch to ogrinfo.
ogrinfo -ro OCI:warmerda/password -sql "SELECT pop_1994 from canada where province_name = 'Alberta'"
ODBC datasources are accessed using a datasource name of the form ODBC:userid/password@dsn, schema.tablename(geometrycolname),...: srs_tablename(sridcolumn,srtextcolumn). With optional items dropped the following are also acceptable:
By default the ODBC driver searches for a GEOMETRY_COLUMNS table. If found, it is used to identify the set of spatial tables that should be treated as layers by OGR. If not found, then all tables in the datasource are returned as non-spatial layers. However, if a table list (a list of comma separated table names) is provided, then only those tables will be represented as layers (non-spatial). Fetching the full definition of all tables in a complicated database can be quite time-consuming, so the ability to restrict the set of tables accessed is primarily a performance measure.
If the GEOMETRY_COLUMNS table is found, it is used to select a column to be the geometry source. If the tables are passed in the datasource name, then the geometry column associated with a table can be included in round brackets after the tablename. It is currently a hardcoded assumption that the geometry is in Well Known Binary (WKB) format if the field is binary, or Well Known Text (WKT) otherwise. The GEOMETRY_COLUMNS table should have at least the columns F_TABLE_NAME, F_GEOMETRY_COLUMN and GEOMETRY_TYPE.
If the table has a geometry column, and has fields called XMIN, YMIN, XMAX and YMAX, then direct table queries with a spatial filter are accelerated. The XMIN, YMIN, XMAX and YMAX fields should represent the extent of the geometry in the row, in the table's coordinate system.
By default, SQL statements are passed directly to the underlying database engine. It's also possible to request the driver to handle SQL commands with the OGR SQL engine, by passing "OGRSQL" string to the ExecuteSQL() method, as name of the SQL dialect.
OGDI can (among other formats) read VPF products, such as DCW and VMAP.
If an OGDI gltp url is opened directly, the OGDI 3.1 capabilities for the driver/server are queried to get a list of layers. One OGR layer is created for each OGDI family available for each layer in the datastore. For drivers such as VRF this can result in a lot of layers. Each of the layers has an OGR name based on the OGDI name plus an underscore and the family name. For instance, a layer might be called watrcrsl@hydro(*)_line if coming out of the VRF driver.
From GDAL/OGR 1.8.0, setting the OGR_OGDI_LAUNDER_LAYER_NAMES configuration option (or environment variable) to YES causes the layer names to be simplified. For example : watrcrsl_hydro instead of 'watrcrsl@hydro(*)_line'
As an alternative to accessing all the layers in a datastore, it is possible to open a particular layer using a customized filename consisting of the regular GLTP URL with the layer name and family type appended (separated by colons). This mechanism must be used to access layers of pre-OGDI 3.1 drivers, as before OGDI 3.1 there was no standard way to discover available layers in OGDI.
gltp:[//<hostname>]/<driver_name>/<dataset_name>:<layer_name>:<family>

where <layer_name> is the OGDI layer name, and <family> is one of: "line", "area", "point", or "text".
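The customized filename above splits cleanly on the last two colons. A minimal sketch of that parsing (assuming layer names themselves contain no colon):

```python
def split_gltp_layer_url(url):
    """Split a customized GLTP URL of the form
    gltp:[//host]/<driver>/<dataset>:<layer_name>:<family>
    into (base_url, layer_name, family)."""
    base, layer, family = url.rsplit(":", 2)
    if family not in ("line", "area", "point", "text"):
        raise ValueError("unknown OGDI family: %r" % family)
    return base, layer, family

print(split_gltp_layer_url(
    "gltp:/vrf/usr4/mpp1/v0eur/vmaplv0/eurnasia:watrcrsl@hydro(*):line"))
```

The VMAP dataset path reused here is the one from the examples later in this section.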
OGDI coordinate system information is supported for most coordinate systems. A warning will be produced when a layer is opened if the coordinate system cannot be translated.
There is no update or creation support in the OGDI driver.
Raster layers cannot be accessed with this driver but can be accessed using the GDAL OGDI Raster driver.
ogrinfo gltp:/vrf/usr4/mpp1/v0eur/vmaplv0/eurnasia 'watrcrsl@hydro(*)_line'
In the dataset name 'gltp:/vrf/usr4/mpp1/v0eur/vmaplv0/eurnasia' the gltp:/vrf part is not really in the filesystem, but has to be added. The VPF data was at /usr4/mpp1/v0eur/. The 'eurnasia' directory should be at the same level as the dht. and lat. files. The 'hydro' reference is a subdirectory of 'eurnasia/' where watrcrsl.* is found.
Usage examples VMAP0 to SHAPE conversion with 'ogr2ogr':
ogr2ogr watrcrsl.shp gltp:/vrf/usr4/mpp1/v0eur/vmaplv0/eurnasia 'watrcrsl@hydro(*)_line'
ogr2ogr polbnda.shp gltp:/vrf/usr4/mpp1/v0eur/vmaplv0/eurnasia 'polbnda@bnd(*)_area'

An OGR SQL query against a VMAP dataset. Again, note the careful quoting of the layer name.
ogrinfo -ro gltp:/vrf/usr4/mpp1/v0noa/vmaplv0/noamer \ -sql 'select * from "polbndl@bnd(*)_line" where use=26'
This driver reads files describing Special Use Airspaces in the OpenAir format.
Airspaces are returned as features of a single layer called 'airspaces', with a geometry of type Polygon and the following fields: CLASS, NAME, FLOOR, CEILING.
Airspace geometries made of arcs will be tessellated. Styling information, when present, is returned at the feature level.
An extra layer called 'labels' will contain a feature for each label (AT element). There can be multiple AT records for a single airspace segment. The fields are the same as the 'airspaces' layer.
This driver reads TABLE objects from PDS datasets. Note there is a GDAL PDS driver to read the raster IMAGE objects from PDS datasets.
The driver must be provided with the product label file (even when the actual data is placed in a separate file).
If the label file contains a TABLE object, it will be read as the only layer of the dataset. If no TABLE object is found, the driver will look for all objects whose names contain the string TABLE and read each one as a separate layer.
ASCII and BINARY tables are supported. The driver can retrieve the field descriptions from inline COLUMN objects or from a separate file pointed to by ^STRUCTURE.
If the table has LONGITUDE and LATITUDE columns of type REAL with UNIT=DEGREE, they will be used to return POINT geometries.
This driver requires a connection to a Postgres database. If you want to prepare a SQL dump to inject later into a Postgres database, you can instead use the PostgreSQL SQL Dump driver (GDAL/OGR >= 1.8.0).
You can find additional information on the driver in the Advanced OGR PostgreSQL driver Information page.
PG:dbname=databasename

or

PG:"dbname='databasename' host='addr' port='5432' user='x' password='y'"

It is also possible to omit the database name and connect to a default database with the same name as the user name.
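The keyword/value form of the connection string can be assembled mechanically. This is a sketch of the documented string layout only; the host and database names are examples, not real servers.

```python
def pg_connection_string(dbname=None, host=None, port=None,
                         user=None, password=None):
    """Build a PG: datasource name from keyword/value pairs, quoting
    values as in the documented example.  Omitted keys are left out
    (omitting dbname connects to the user's default database)."""
    parts = []
    for key, value in (("dbname", dbname), ("host", host), ("port", port),
                       ("user", user), ("password", password)):
        if value is not None:
            parts.append("%s='%s'" % (key, value))
    return "PG:" + " ".join(parts)

print(pg_connection_string(dbname="warmerda", host="myserver.velocet.ca"))
# PG:dbname='warmerda' host='myserver.velocet.ca'
```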
Starting with GDAL 1.7.0, the driver also supports the geography column type introduced in PostGIS 1.5.
The PostgreSQL driver passes SQL statements directly to PostgreSQL by default, rather than evaluating them internally, when using the ExecuteSQL() call on the OGRDataSource or the -sql command option to ogr2ogr. Attribute query expressions are also passed directly through to PostgreSQL. It is also possible to request the OGR PG driver to handle SQL commands with the OGR SQL engine, by passing the string "OGRSQL" to the ExecuteSQL() method as the name of the SQL dialect.
The PostgreSQL driver in OGR supports the OGRDataSource::StartTransaction(), OGRDataSource::CommitTransaction() and OGRDataSource::RollbackTransaction() calls in the normal SQL sense.
As mentioned above the type system is impoverished, and many OGR types are not appropriately mapped into PostgreSQL.
If the database has PostGIS types loaded (i.e. the geometry type), newly created layers will be created with the PostGIS Geometry type. Otherwise they will use OID.
By default it is assumed that text being sent to Postgres is in the UTF-8 encoding. This is fine for plain ASCII, but can result in errors for extended characters (ASCII 155+, LATIN1, etc.). While OGR provides no direct control over this, you can set the PGCLIENTENCODING environment variable to indicate the format being provided. For instance, if your text is LATIN1 you could set the environment variable to LATIN1 before using OGR, and input would be assumed to be LATIN1 instead of UTF-8. An alternate way of setting the client encoding is to issue the following SQL command with ExecuteSQL(): "SET client_encoding TO encoding_name" where encoding_name is LATIN1, etc. Errors can be caught by enclosing this command with a CPLPushErrorHandler()/CPLPopErrorHandler() pair.
% ogr2ogr -f PostgreSQL PG:dbname=warmerda abc.shp
This second example loads a political boundaries layer from VPF (via the OGDI driver), and renames the layer from the cryptic OGDI layer name to something more sensible. If an existing table of the desired name exists it is overwritten.
% ogr2ogr -f PostgreSQL PG:dbname=warmerda \ gltp:/vrf/usr4/mpp1/v0eur/vmaplv0/eurnasia \ -lco OVERWRITE=yes -nln polbndl_bnd 'polbndl@bnd(*)_line'
In this example we merge tiger line data from two different directories of tiger files into one table. Note that the second invocation uses -append and no OVERWRITE=yes.
% ogr2ogr -f PostgreSQL PG:dbname=warmerda tiger_michigan \ -lco OVERWRITE=yes CompleteChain % ogr2ogr -update -append -f PostgreSQL PG:dbname=warmerda tiger_ohio \ CompleteChain
This example shows using ogrinfo to evaluate an SQL query statement within PostgreSQL. More sophisticated PostGIS specific queries may also be used via the -sql commandline switch to ogrinfo.
ogrinfo -ro PG:dbname=warmerda -sql "SELECT pop_1994 from canada where province_name = 'Alberta'"
This example shows using ogrinfo to list PostgreSQL/PostGIS layers on a different host.
ogrinfo -ro PG:'host=myserver.velocet.ca user=postgres dbname=warmerda'
This write-only driver implements support for generating a SQL dump file that can later be injected into a live PostgreSQL instance. It supports PostgreSQL extended with the PostGIS geometries.
This driver is very similar to the PostGIS shp2pgsql utility.
Most creation options are shared with the regular PostgreSQL driver.
The driver opens the output file through the VSIF Large API, so it is possible to specify /vsistdout/ as output file to output on the standard output.
% ogr2ogr --config PG_USE_COPY YES -f PGDump abc.sql abc.shp -lco SRID=32631
% ogr2ogr --config PG_USE_COPY YES -f PGDump /vsistdout/ abc.shp | psql -d my_dbname -f -
OGR optionally supports reading ESRI Personal GeoDatabase .mdb files via ODBC. A Personal GeoDatabase is a Microsoft Access database with a set of tables defined by ESRI for holding geodatabase metadata, with the geometry for features held in a BLOB column in a custom format (essentially Shapefile geometry fragments). This driver accesses the personal geodatabase via ODBC but does not depend on any ESRI middleware.
Personal Geodatabases are accessed by passing the file name of the .mdb file to be accessed as the data source name. On Windows, no ODBC DSN is required. On Linux, there are problems with DSN-less connections due to an incomplete or buggy implementation of this feature in the MDB Tools package, so it is necessary to configure a Data Source Name (DSN) if the MDB Tools driver is used (see the instructions below).
In order to facilitate compatibility with different configurations, the PGEO_DRIVER_TEMPLATE config option was added to provide a way to programmatically set the DSN with the filename as an argument. In cases where the driver name is known, this allows the DSN to be constructed from that information in a manner similar to the default (used for Windows access to the Microsoft Access Driver).
OGR treats all feature tables as layers. Most geometry types should be supported, including 3D data. Measures information will be discarded. Coordinate system information should be properly associated with layers.
Currently the OGR Personal Geodatabase driver does not take advantage of spatial indexes for fast spatial queries, though that may be added in the future.
By default, SQL statements are passed directly to the MDB database engine. It's also possible to request the driver to handle SQL commands with OGR SQL engine, by passing "OGRSQL" string to the ExecuteSQL() method, as name of the SQL dialect.
Starting with GDAL/OGR 1.9.0, the MDB driver is an alternate way of reading ESRI Personal GeoDatabase .mdb files without requiring unixODBC and MDB Tools
This article gives step-by-step explanation of how to use OGR with unixODBC package and how to access Personal Geodatabase with PGeo driver. See also GDAL wiki for other details
(On Ubuntu 8.04 : sudo apt-get install unixodbc libmdbodbc)
There are two configuration files for unixODBC:
Format of configuration files is very simple:
[section_name]
entry1 = value
entry2 = value
For more details, refer to unixODBC manual.
First, you need to configure the ODBC driver to access Microsoft Access databases with MDB Tools. Add the following definition to your odbcinst.ini file.
[Microsoft Access Driver (*.mdb)]
Description = MDB Tools ODBC drivers
Driver = /usr/lib/libmdbodbc.so.0
Setup =
FileUsage = 1
CPTimeout =
CPReuse =
In this section, I use 'sample.mdb' as a name of Personal Geodatabase, so replace this name with your own database.
Create .odbc.ini file in your HOME directory:
$ touch ~/.odbc.ini
Put the following ODBC data source definition in your .odbc.ini file:
[sample_pgeo]
Description = Sample PGeo Database
Driver = Microsoft Access Driver (*.mdb)
Database = /home/mloskot/data/sample.mdb
Host = localhost
Port = 1360
User = mloskot
Password =
Trace = Yes
TraceFile = /home/mloskot/odbc.log
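Since .odbc.ini uses a simple INI layout, a DSN entry like the one above can be generated with Python's standard configparser. A minimal sketch; the section name, paths and driver string are the sample values from this walkthrough, not anything your system is guaranteed to have.

```python
import configparser
import io

cfg = configparser.ConfigParser()
cfg.optionxform = str  # preserve key case instead of lowercasing
cfg["sample_pgeo"] = {
    "Description": "Sample PGeo Database",
    "Driver": "Microsoft Access Driver (*.mdb)",
    "Database": "/home/mloskot/data/sample.mdb",
}
buf = io.StringIO()
cfg.write(buf)          # in practice, write to ~/.odbc.ini instead
print(buf.getvalue())
```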
Step by step explanation of DSN entry:
Now, you can try to access PGeo data source with ogrinfo.
First, check if you have PGeo driver built in OGR:
$ ogrinfo --formats
Supported Formats:
  ESRI Shapefile
  ...
  PGeo
  ...
Now, you can access your Personal Geodatabase. As a data source, use PGeo:<DSN> where <DSN> is the name of the DSN entry you put in your .odbc.ini.
ogrinfo PGeo:sample_pgeo
INFO: Open of `PGeo:sample_pgeo'
using driver `PGeo' successful.
1. ...

After you run the command above, you should get a list of the layers stored in your geodatabase.
Now, you can try to query details of particular layer:
ogrinfo PGeo:sample_pgeo <layer name>
INFO: Open of `PGeo:sample_pgeo'
using driver `PGeo' successful.

Layer name: ...
The S-57 driver module produces features for all S-57 features in an S-57 file, and applies associated updates. S-57 (ENC) files normally have the extension ".000".
S-57 feature objects are translated into features. S-57 geometry objects are automatically collected and formed into geometries on the features.
The S-57 reader depends on having two supporting files, s57objectclasses.csv, and s57attributes.csv available at runtime in order to translate features in an object class specific manner. These should be in the directory pointed to by the environment variable S57_CSV, or in the current working directory.
S-57 update files contain information on how to update a distributed S-57 base data file. The base files normally have the extension .000 while the update files have extensions like .001, .002 and so on. The S-57 reader will normally read and apply all update files to the in-memory version of the base file on the fly. The feature data provided to the application therefore includes all the updates.
Each feature type has a predefined set of attributes as defined by the S-57 standard. For instance, the airport (AIRARE) object class can have the AIRARE, CATAIR, CONDTN, CONVIS, NOBJNM, OBJNAM, STATUS, INFORM, NINFOM, NTXTDS, PICREP, SCAMAX, SCAMIN, TXTDSC, RECDAT, RECIND, SORDAT, and SORIND attributes. These short names can be related to longer, more meaningful names using an S-57 object/attribute catalog such as the S-57 standard document itself, or the catalog files (s57attributes.csv and s57objectclasses.csv). Such a catalog can also be used to establish all the available object classes and their attributes.
The following are some common attributes, including generic attributes which appear on all features, regardless of object class.
Attribute Name  Description                            Defined On
--------------  -----------                            ----------
GRUP            Group number.                          All features
OBJL            Object label code.  This number        All features
                indicates the object class of the
                feature.
RVER            Record version.
AGEN            Numeric agency code, such as 50 for    All features
                the Canadian Hydrographic Service.
                A potentially outdated list is
                available in agencode.txt.
FIDN            Feature identification number.         All features
FIDS            Feature identification subdivision.    All features
DSNM            Dataset name.  The name of the file    All features
                the feature came from.  Used with
                LNAM to form a unique dataset wide
                identifier for a feature.
INFORM          Informational text.                    Some features
NINFOM          Informational text in national         Some features
                language.
OBJNAM          Object name                            Some features
NOBJNM          Object name in national language.      Some features
SCAMAX          Maximum scale for display              Some features
SCAMIN          Minimum scale for display              Some features
SORDAT          Source date                            Some features

The following are present if LNAM_REFS is enabled:
LNAM       Long name.  An encoding of AGEN,        All features
           FIDN and FIDS used to uniquely
           identify this feature within an
           S-57 file.
LNAM_REFS  List of long names of related features  All features
FFPT_RIND  Relationship indicators for each of     All features
           the LNAM_REFS relationships.
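Since the LNAM is described above as an encoding of AGEN, FIDN and FIDS, it can be sketched as a fixed-width hexadecimal concatenation. The 4+8+4 hex-digit layout is an assumption about the encoding, not stated in this document.

```python
def make_lnam(agen, fidn, fids):
    """Sketch: form a long name (LNAM) from AGEN, FIDN and FIDS.
    The zero-padded hex layout (4 + 8 + 4 digits) is an assumption."""
    return "%04X%08X%04X" % (agen, fidn, fids)

# AGEN 50 is the Canadian Hydrographic Service per the table above;
# the FIDN/FIDS values are made up for illustration.
print(make_lnam(50, 123456, 1))
```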
set OGR_S57_OPTIONS = "RETURN_PRIMITIVES=ON,RETURN_LINKAGES=ON,LNAM_REFS=ON"
ArcSDE instances are accessed with a datasource name of the following form. The server, instance, username and password fields are required. The instance is the port number of the SDE server, which generally defaults to 5151. If the layer parameter is specified then the SDE driver is able to skip reading the summary metadata for each layer; skipping this step can be a significant time savings.
Note: Only GDAL 1.6+ supports querying against versions and write operations. Older versions only support querying against the base (SDE.DEFAULT) version and no writing operations.
SDE:server,instance,database,username,password[,layer]

To specify a version to query against, you *must* specify a layer as well. The SDE.DEFAULT version will be used when no version name is specified.
SDE:server,instance,database,username,password,layer,[version]

You can also request to create a new version if it does not already exist. If the child version already exists, it will be used unless the SDE_VERSIONOVERWRITE environment variable is set to "TRUE". In that case, the version will be deleted and recreated.
SDE:server,instance,database,username,password,layer,[parentversion],[childversion]

The OGR ArcSDE driver does not support reading CAD data (treated as a BLOB attribute), annotation properties, measure values at vertices, or raster data. The ExecuteSQL() method does not get passed through to the underlying database; for now it is interpreted by the limited OGR SQL handler. Spatial indexes are used to accelerate spatial queries.
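The comma-separated SDE datasource forms above can be assembled with a small helper. A sketch of the documented string layout only; the server and credential values are hypothetical.

```python
def sde_datasource_name(server, instance, database, username, password,
                        layer=None, parent_version=None, child_version=None):
    """Build an SDE: datasource name.  Optional trailing parts are
    appended in order; per the docs, a version may only be given
    when a layer is also given."""
    if (parent_version or child_version) and not layer:
        raise ValueError("a layer must be specified to use versions")
    parts = [server, str(instance), database, username, password]
    for part in (layer, parent_version, child_version):
        if part:
            parts.append(part)
    return "SDE:" + ",".join(parts)

# 5151 is the usual default SDE port; everything else is made up.
print(sde_datasource_name("sdehost", 5151, "gis", "user", "pw",
                          "parcels", "SDE.DEFAULT"))
```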
The driver has been tested with ArcSDE 9.x, and should work with newer versions, as well as ArcSDE 8.2 or 8.3. Both 2D and 3D geometries are supported. Curve geometries are approximated as line strings (actually still TODO).
ArcSDE is generally sensitive to the case of fully-qualified table names. While you may be able to use short names for some operations, others (notably deleting) require a fully-qualified name. Because of this, it is generally best to always use fully-qualified table names.
To select an SDTS transfer, the name of the catalog file should be used, for instance TR01CATD.DDF, where the first four characters are all that typically varies.
SDTS coordinate system information is properly supported for most coordinate systems defined in SDTS.
There is no update or creation support in the SDTS driver.
Note that in TVP datasets the polygon geometry is formed from the geometry in the line modules. Primary attribute module attributes should be properly attached to their related node, line or polygon features, but can be accessed separately as their own layers.
This driver has no support for raster (DEM) SDTS datasets.
All varieties of ESRI Shapefiles should be available for reading, and simple 3D files can be created.
Normally the OGR Shapefile driver treats a whole directory of shapefiles as a dataset, and a single shapefile within that directory as a layer. In this case the directory name should be used as the dataset name. However, it is also possible to use one of the files (.shp, .shx or .dbf) in a shapefile set as the dataset name, and then it will be treated as a dataset with one layer.
Note that when reading a Shapefile of type SHPT_ARC, the corresponding layer will be reported as of type wkbLineString, but depending on the number of parts of each geometry, the actual type of the geometry for each feature can be either OGRLineString or OGRMultiLineString. The same applies for SHPT_POLYGON shapefiles, reported as layers of type wkbPolygon, but depending on the number of parts of each geometry, the actual type can be either OGRPolygon or OGRMultiPolygon.
The new ESRI measure values are discarded if encountered. MultiPatch files are read and each patch geometry is turned into a multi-polygon representation with one polygon per triangle in triangle fans and meshes.
If a .prj file in old Arc/Info style or new ESRI OGC WKT style is present, it will be read and used to associate a projection with features.
The read driver assumes that multipart polygons follow the specification, that is to say the vertices of outer rings should be oriented clockwise on the X/Y plane, and those of inner rings counterclockwise. If a Shapefile is broken w.r.t. that rule, it is possible to define the configuration option OGR_ORGANIZE_POLYGONS=DEFAULT to proceed to a full analysis based on topological relationships of the parts of the polygons so that the resulting polygons are correctly defined in the OGC Simple Feature convention.
An attempt is made to read the LDID/codepage setting from the .dbf file and use it to translate string fields to UTF-8 on read, and back when writing. LDID "87 / 0x57" is treated as ISO8859_1 which may not be appropriate. The SHAPE_ENCODING configuration option may be used to override the encoding interpretation of the shapefile with any encoding supported by CPLRecode or to "" to avoid any recoding. (Recoding support is new for GDAL/OGR 1.9.0)
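The recoding described above can be illustrated in isolation. This is not the driver's code, just the transformation it performs on read for a .dbf flagged with LDID 0x57:

```python
# ISO-8859-1 bytes as they would be stored in the .dbf ("Québec").
raw = b"Qu\xe9bec"

# What the driver does on read: interpret as ISO-8859-1, expose as UTF-8.
utf8_text = raw.decode("iso8859_1")
print(utf8_text.encode("utf-8"))  # b'Qu\xc3\xa9bec'
```

Writing goes through the same mapping in reverse, and SHAPE_ENCODING overrides which source encoding is assumed.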
The OGR Shapefile driver supports spatial indexing and a limited form of attribute indexing.
The spatial indexing uses the same .qix quadtree spatial index files that are used by UMN MapServer. It does not support the ESRI spatial index files (.sbn / .sbx). Spatial indexing can accelerate spatially filtered passes through large datasets to pick out a small area quite dramatically.
To create a spatial index, issue a SQL command of the form
CREATE SPATIAL INDEX ON tablename [DEPTH N]
where the optional DEPTH specifier can be used to control the number of index tree levels generated. If DEPTH is omitted, the tree depth is estimated based on the number of features in the shapefile, and its value ranges from 1 to 12.
To delete a spatial index issue a command of the form
DROP SPATIAL INDEX ON tablename
Otherwise, the MapServer shptree utility can be used:
shptree <shpfile> [<depth>] [<index_format>]
More information about this utility is available on the MapServer shptree page.
Currently the OGR Shapefile driver only supports attribute indexes for looking up specific values in a unique key column. To create an attribute index for a column issue an SQL command of the form "CREATE INDEX ON tablename USING fieldname". To drop the attribute indexes issue a command of the form "DROP INDEX ON tablename". The attribute index will accelerate WHERE clause searches of the form "fieldname = value". The attribute index is actually stored as a mapinfo format index and is not compatible with any other shapefile applications.
The Shapefile driver treats a directory as a dataset, and each Shapefile set (.shp, .shx, and .dbf) as a layer. The dataset name will be treated as a directory name. If the directory already exists it is used and existing files in the directory are ignored. If the directory does not exist it will be created.
As a special case attempts to create a new dataset with the extension .shp will result in a single file set being created instead of a directory.
ESRI shapefiles can only store one kind of geometry per layer (shapefile). On creation this may be set based on the source file (if a uniform geometry type is known from the source driver), or it may be set directly by the user with the layer creation option SHPT (shown below). If not set, the layer creation will fail. If geometries of incompatible types are written to the layer, the output will be terminated with an error.
Note that this can make it very difficult to translate a mixed geometry layer from another format into Shapefile format using ogr2ogr, since ogr2ogr has no support for separating out geometries from a source layer. See the FAQ for a solution.
Shapefile feature attributes are stored in an associated .dbf file, and so attributes suffer a number of limitations:
Attribute names can only be up to 10 characters long. Longer names will be silently truncated. This may result in non-unique column names, which will definitely cause problems later.
Starting with version 1.7, the OGR Shapefile driver tries to generate unique field names. Successive duplicate field names, including those created by truncation to 10 characters, will be truncated to 8 characters and appended with a serial number from 1 to 99.
For example:
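A sketch of one plausible reading of that renaming scheme (truncate to 10 characters; on collision, cut to 8 and append a serial 1..99). This is an illustration of the documented behavior, not the driver's exact algorithm, and the field names are made up:

```python
def unique_dbf_names(names):
    """Truncate field names to 10 chars; rename a duplicate by cutting
    it to 8 chars and appending a serial number from 1 to 99."""
    seen = set()
    out = []
    for name in names:
        candidate = name[:10]
        if candidate in seen:
            for serial in range(1, 100):
                candidate = "%s%d" % (name[:8], serial)
                if candidate not in seen:
                    break
            else:
                raise ValueError("too many duplicates for %r" % name)
        seen.add(candidate)
        out.append(candidate)
    return out

print(unique_dbf_names(["population_total", "population_rural"]))
# ['population', 'populati1']
```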
Only Integer, Real, String and Date (not DateTime, just year/month/day) field types are supported. The various list, and binary field types cannot be created.
The field width and precision are directly used to establish storage size in the .dbf file. This means that strings longer than the field width, or numbers that don't fit into the indicated field format will suffer truncation.
Integer fields without an explicit width are treated as width 11.
Real (floating point) fields without an explicit width are treated as width 24 with 15 decimal places of precision.
String fields without an assigned width are treated as 80 characters.
Also, .dbf files are required to have at least one field. If none are created by the application an "FID" field will be automatically created and populated with the record number.
The OGR shapefile driver supports rewriting existing shapes in a shapefile as well as deleting shapes. Deleted shapes are marked for deletion in the .dbf file, and then ignored by OGR. To actually remove them permanently (resulting in renumbering of FIDs) invoke the SQL 'REPACK <tablename>' via the datasource ExecuteSQL() method.
Shapefiles store the layer spatial extent in the .SHP file. The layer spatial extent is automatically updated when inserting a new feature in a shapefile. However when updating an existing feature, if its previous shape was touching the bounding box of the layer extent but the updated shape does not touch the new extent, the computed extent will not be correct. It is then necessary to force a recomputation by invoking the SQL 'RECOMPUTE EXTENT ON <tablename>' via the datasource ExecuteSQL() method. The same applies for the deletion of a shape.
Note: RECOMPUTE EXTENT ON is available in OGR >= 1.9.0.
Geometry: The Shapefile format explicitly uses 32bit offsets (actually 32bit offsets to 16bit words) and so cannot go over 8GB. Hence, it is not recommended to use a file size over 4GB.
Attributes: The dbf format does not have any offsets in it, so it can be arbitrarily large.
None
A merge of two shapefiles 'file1.shp' and 'file2.shp' into a new file 'file_merged.shp' is performed like this:
% ogr2ogr file_merged.shp file1.shp % ogr2ogr -update -append file_merged.shp file2.shp -nln file_merged
The second command is opening file_merged.shp in update mode, and trying to find existing layers and append the features being copied.
The -nln option sets the name of the layer to be copied to.
OGR optionally supports spatial and non-spatial tables stored in SQLite 3.x database files. SQLite is a "light weight" single file based RDBMS engine with fairly complete SQL semantics and respectable performance.
The driver looks for a geometry_columns table laid out loosely according to the OGC Simple Features standards, particularly as defined in FDO RFC 16. If found, it is used to map tables to layers.
If geometry_columns is not found, each table is treated as a layer. Layers with a WKT_GEOMETRY field will be treated as spatial tables, and the WKT_GEOMETRY column will be read as Well Known Text geometry.
If geometry_columns is found, it will be used to lookup spatial reference systems in the spatial_ref_sys table.
The SQLite database is essentially typeless, but the SQLite driver will attempt to classify attribute fields as text, integer or floating point based on the contents of the first record in a table. None of the list attribute field types are supported in SQLite.
SQLite databases often do not work well over NFS or some other networked file system protocols, due to poor support for locking. It is safest to operate only on SQLite files on a physical disk of the local system.
The SQLite driver is optionally compiled in. It is not compiled in by default.
While the SQLite driver supports reading spatial data from records, there is no support for spatial indexing, so spatial queries will tend to be slow. Attribute queries may be fast, especially if indexes are built for appropriate attribute columns using the "CREATE INDEX <indexname> ON <tablename> (<columnname>)" SQL command.
By default, SQL statements are passed directly to the SQLite database engine. It is also possible to request that the driver handle SQL commands with the OGR SQL engine, by passing the string "OGRSQL" to the ExecuteSQL() method as the name of the SQL dialect.
Starting with OGR 1.8.0, the OGR_SQLITE_SYNCHRONOUS configuration option has been added. When set to OFF, this issues a 'PRAGMA synchronous = OFF' command to the SQLite database. This has the advantage of speeding up some write operations (e.g. on EXT4 filesystems), but at the expense of data safety with respect to system/OS crashes. So use it carefully in production environments and read the related SQLite documentation.
The SQLite driver can read and write SpatiaLite databases. Updating a SpatiaLite database requires explicit linking against the SpatiaLite library (version >= 2.3). Explicit linking against the SpatiaLite library also provides access to functions provided by this library, such as spatial indexes, spatial functions, etc.
A few examples:
# Duplicate the sample database provided with SpatiaLite
ogr2ogr -f SQLite testspatialite.sqlite test-2.3.sqlite -dsco SPATIALITE=YES

# Make a request with a spatial filter. Will work faster if a spatial index has
# been created and OGR is explicitly linked against the SpatiaLite library.
ogrinfo testspatialite.sqlite Towns -spat 754000 4692000 770000 4924000
The SQLite driver supports creating new SQLite database files, or adding tables to existing ones. Note that a new database file cannot be created over an existing file.
This driver reads files describing Special Use Airspaces in Tim Newport-Peace's .SUA format.
Airspaces are returned as features of a single layer, with a geometry of type Polygon and the following fields: TYPE, CLASS, TITLE, TOPS, BASE.
Airspace geometries made of arcs will be tessellated.
OGR has support for SVG reading (if GDAL is built with expat library support).
Currently, it will only read SVG files that are the output of the Cloudmade Vector Stream Server.
All coordinates are relative to the Pseudo-mercator SRS (EPSG:3857).
The driver will return 3 layers:
TIGER/Line files are a digital database of geographic features, such as roads, railroads, rivers, lakes, political boundaries, census statistical boundaries, etc. covering the entire United States. The database contains information about these features such as their location in latitude and longitude, the name, the type of feature, address ranges for most streets, the geographic relationship to other features, and other related information. They are the public product created from the Census Bureau's TIGER (Topologically Integrated Geographic Encoding and Referencing) database of geographic information. TIGER was developed at the Census Bureau to support the mapping and related geographic activities required by the decennial census and sample survey programs.
Note that the TIGER/Line product does not include census demographic statistics. Those are sold by the Census Bureau in a separate format (not directly supported by FME), but those statistics do relate back to census blocks in TIGER/Line files.
To open a TIGER/Line dataset, select the directory containing one or more sets of data files. The regions are counties, or county equivalents. Each county consists of a series of files with a common basename and different extensions. For instance, county 1 in state 26 (Michigan) consists of the following file set in Tiger98.
TGR26001.RT1 TGR26001.RT2 TGR26001.RT3 TGR26001.RT4 TGR26001.RT5 TGR26001.RT6
TGR26001.RT7 TGR26001.RT8 TGR26001.RT9 TGR26001.RTA TGR26001.RTC TGR26001.RTH
TGR26001.RTI TGR26001.RTP TGR26001.RTR TGR26001.RTS TGR26001.RTZ

The TIGER/Line coordinate system is hardcoded to NAD83 lat/long degrees. This should be appropriate for all recent years of TIGER/Line production.
There is no update or creation support in the TIGER/Line driver.
The reader was implemented for TIGER/Line 1998 files, but some effort has gone into ensuring compatibility with 1992, 1995, 1997, 1999, 2000, 2002, 2003 and 2004 TIGER/Line products as well. The 2005 products have also been reported to work fine. It is believed that any TIGER/Line product from the 1990s should work with the reader, with the possible loss of some version specific information.
The TIGER/Line RT (record type) and VERSION attributes are generally discarded, but the MODULE attribute is added to each feature. The MODULE attribute contains the basename (e.g. TGR26001) of the county module from which the feature originated. For some keys (such as LAND, POLYID, and CENID) this MODULE attribute is needed to make the key unique when the dataset (directory) consists of more than one county of data.
Following is a list of feature types, and their relationship to the TIGER/Line product.
These features always have a line geometry.
Note that zero, one or more AltName records may exist for a given TLID, and each AltName record can contain between one and four alternate names. Because it is still very difficult to utilize AltName features to relate alternate names to CompleteChains, it is anticipated that the standard reader pipeline for TIGER/Line files will be upgraded in the future, resulting in a simplification of alternate names.
These features have no associated geometry.
These features have no associated geometry.
These features have no associated geometry.
These features may have an associated point geometry. Landmarks associated with polygons will not have the polygon geometry attached. It would need to be collected (via the AreaLandmark feature) from a Polygon feature.
These features have no associated geometry.
These features have no associated geometry.
These features do not have any geometry associated with them as read by the OGR TIGER driver; the geometry needs to be externally related using the PolyChainLink. The gdal/pymod/samples/tigerpoly.py script may be used to read a TIGER dataset and extract the polygon layer with geometry as a shapefile.
These features have no associated geometry.
These features have no associated geometry.
These features have no associated geometry.
These features have a point geometry.
These features have no associated geometry.
Points are represented as wkbPoints, lines and boundaries as wkbLineStrings and areas as wkbPolygons. wkbMulti* primitives are not used. Feature types cannot be mixed in one layer.
The virtual files are currently normally prepared by hand.
Since GDAL 1.7.0, the CreateFeature(), SetFeature() and DeleteFeature() operations are supported on a layer of a VRT dataset, if the following conditions are met:
<OGRVRTDataSource>
    <OGRVRTLayer name="worms">
        <SrcDataSource>ODBC:DISEASE,worms</SrcDataSource>
        <SrcLayer>worms</SrcLayer>
        <GeometryType>wkbPoint</GeometryType>
        <LayerSRS>WGS84</LayerSRS>
        <GeometryField encoding="PointFromColumns" x="x" y="y"/>
    </OGRVRTLayer>
</OGRVRTDataSource>
<OGRVRTDataSource>
    <OGRVRTLayer name="remapped_layer">
        <SrcDataSource>your_source.shp</SrcDataSource>
        <SrcSQL>SELECT src_field_1 AS name, src_field_2 AS desc FROM your_source_layer_name</SrcSQL>
    </OGRVRTLayer>
</OGRVRTDataSource>

This can also be accomplished (from GDAL 1.7.0) using explicit field definitions:
<OGRVRTDataSource>
    <OGRVRTLayer name="remapped_layer">
        <SrcDataSource>your_source.shp</SrcDataSource>
        <SrcLayer>your_source</SrcLayer>
        <Field name="name" src="src_field_1" />
        <Field name="desc" src="src_field_2" type="String" width="45" />
    </OGRVRTLayer>
</OGRVRTDataSource>
<OGRVRTDataSource>
    <OGRVRTLayer name="source">
        <SrcDataSource>source.shp</SrcDataSource>
        <SrcRegion clip="true">POLYGON((0 40,10 40,10 50,0 50,0 40))</SrcRegion>
    </OGRVRTLayer>
</OGRVRTDataSource>
This driver can connect to an OGC WFS service. It supports the WFS 1.0.0 and WFS 1.1.0 protocols. GDAL/OGR must be built with Curl support in order for the WFS driver to be compiled. Usually WFS requests return results in GML format, so the GML driver should generally be set up for read support (thus requiring GDAL/OGR to be built with Xerces or Expat support). It is sometimes possible to use alternate underlying formats when the server supports them (such as OUTPUTFORMAT=json).
The driver supports read-only services, as well as transactional ones (WFS-T).
Additional optional parameters can be specified, such as TYPENAME, VERSION and MAXFEATURES, as specified in the WFS specification.
It is also possible to specify the name of an XML file whose content matches the following syntax (the <OGRWFSDataSource> element must be the first bytes of the file):
<OGRWFSDataSource>
    <URL>http://path/to/WFS/service[?OPTIONAL_PARAMETER1=VALUE[OPTIONAL_PARAMETER2=VALUE]]</URL>
</OGRWFSDataSource>
At the first opening, the content of the result of the GetCapabilities request will be appended to the file, so that it can be cached for later openings of the dataset. The same applies for the DescribeFeatureType request issued to discover the field definition of each layer.
The service description file has the following additional elements as immediate children of the OGRWFSDataSource element that may be optionally set.
Some servers (such as MapServer >= 6.0) support a vendor-specific option, STARTINDEX, that allows requests to be done per "page", and thus avoids downloading the whole content of the layer in a single request. The OGR WFS client will use paging when the OGR_WFS_PAGING_ALLOWED configuration option is set to ON. The page size (number of features fetched in a single request) is limited to 100 by default. It can be changed by setting the OGR_WFS_PAGE_SIZE configuration option. These two options can also be set in a WFS XML description file.
Write support is only enabled when the datasource is opened in update mode.
The mapping between the operations of the WFS Transaction service and the OGR concepts is the following:
There are a few caveats to keep in mind. The OGR feature ID (FID) is an integer-based value, whereas the WFS/GML gml:id attribute is a string. Thus it is not always possible to match both values. The WFS driver therefore exposes the gml:id attribute of a feature as a 'gml_id' field.
When inserting a new feature with CreateFeature(), and if the command is successful, OGR will fetch the returned gml:id and set the 'gml_id' field of the feature accordingly. It will also try to set the OGR FID if the gml:id is of the form layer_name.numeric_value. Otherwise the FID will be left at its unset default value.
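The FID derivation described above can be modelled like this (an illustrative pure-Python sketch of the rule, not the driver's implementation):

```python
def fid_from_gml_id(gml_id, layer_name):
    """Return the OGR FID implied by a gml:id of the form
    'layer_name.numeric_value', or None when no FID can be derived."""
    prefix = layer_name + "."
    if gml_id.startswith(prefix):
        suffix = gml_id[len(prefix):]
        if suffix.isdigit():
            return int(suffix)
    return None  # FID is left at its unset default value

assert fid_from_gml_id("world.3", "world") == 3
assert fid_from_gml_id("world.abc", "world") is None   # non-numeric suffix
assert fid_from_gml_id("other.3", "world") is None     # different layer name
```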
When updating an existing feature with SetFeature(), the OGR FID field will be ignored. The request issued to the driver will only take into account the value of the gml:id field of the feature. The same applies for DeleteFeature().
It is possible to surround those operations between OGRLayer::StartTransaction() and OGRLayer::CommitTransaction(). The operations will be stored into memory and only executed at the time CommitTransaction() is called.
The drawback for CreateFeature() is that the user cannot know which gml:id values have been assigned to the inserted features. A special SQL statement has been introduced into the WFS driver to work around this: by issuing the "SELECT _LAST_INSERTED_FIDS_ FROM layer_name" command (where layer_name is to be replaced with the actual layer name) through OGRDataSource::ExecuteSQL(), a layer will be returned with a single gml_id attribute and as many rows as features inserted during the last committed transaction.
Note: currently, only CreateFeature() makes use of the OGR transaction mechanism. SetFeature() and DeleteFeature() will still be issued immediately.
"DELETE FROM layer_name WHERE expression": this will result in a WFS delete operation. This can be a fast way of deleting one or several features. In particular, this can be a faster replacement for OGRLayer::DeleteFeature() when the gml:id is known but the feature has not been fetched from the server.
"SELECT _LAST_INSERTED_FIDS_ FROM layer_name" : see above paragraph.
(OGR >= 1.9.0)
A "hidden" layer called "WFSLayerMetadata" is filled with records with metadata for each WFS layer.
Each record contains a "layer_name", "title" and "abstract" field, from the document returned by GetCapabilities.
That layer is returned through GetLayerByName("WFSLayerMetadata").
ogrinfo -ro WFS:http://www2.dmsolutions.ca/cgi-bin/mswfs_gmap
ogrinfo -ro mswfs_gmap.xml
ogrinfo -ro WFS:http://www2.dmsolutions.ca/cgi-bin/mswfs_gmap popplace -spat 0 0 2961766.250000 3798856.750000
ogrinfo "WFS:http://www.tinyows.org/cgi-bin/tinyows" tows:world -ro -al -where "gml_id='world.2' or gml_id='world.3'"
ogrinfo -ro -al "WFS:http://v2.suite.opengeo.org/geoserver/ows" WFSLayerMetadata
The driver is able to read the following files:
Filename | Description | Supported versions |
---|---|---|
apt.dat | Airport data | 850, 810 |
nav.dat (or earth_nav.dat) | Navigation aids | 810, 740 |
fix.dat (or earth_fix.dat) | IFR intersections | 600 |
awy.dat (or earth_awy.dat) | Airways | 640 |
Each file will be reported as a set of layers whose data schema is given below. The data schema is generally as close as possible to the original data schema described in the X-Plane specification. However, please note that meters (or kilometers) are always used to report heights, elevations, distances (widths, lengths), etc., even if the original data are sometimes expressed in feet or nautical miles.
Data is reported as being expressed in the WGS84 datum (latitude, longitude), although the specification is not very clear on that subject.
The OGR_XPLANE_READ_WHOLE_FILE configuration option can be set to FALSE when reading a file that is big relative to the available RAM (especially true for apt.dat). This option forces the driver not to cache features in RAM, but just to fetch the features of the current layer. Of course, this will have a negative impact on performance.

Converting all the layers contained in 'apt.dat' into a set of shapefiles:

% ogr2ogr apt_shapes apt.dat
Converting all the layers contained in 'apt.dat' into a PostgreSQL database:
% PG_USE_COPY=yes ogr2ogr -overwrite -f PostgreSQL PG:"dbname=apt" apt.dat
The following layers are reported:
All the layers other than APT refer to the airport through the "apt_icao" column, which can serve as a foreign key.
Fields:
Note: when a runway has a displaced threshold, the threshold will be reported as two features: one at the non-displaced threshold position (is_displaced=0), and another one at the displaced threshold position (is_displaced=1).
Fields:
The source file may contain Bezier curves as sides of the polygon. Due to the lack of support for such geometry in the OGR Simple Feature model, Bezier curves are discretized into linear pieces.
Fields:
Fields:
Fields:
Fields:
Fields:
Fields:
Fields:
Fields:
Fields:
The following layers are reported:
Fields:
Fields:
Fields:
Fields:
Fields:
Fields:
The following layer is reported:
The following layers are reported:
The OGRLayer class also supports applying an attribute query filter to features returned using the OGRLayer::SetAttributeFilter() method. The syntax for the attribute filter is the same as the WHERE clause in the OGR SQL SELECT statement. So everything here with regard to the WHERE clause applies in the context of the SetAttributeFilter() method.
NOTE: OGR SQL has been reimplemented for GDAL/OGR 1.8.0. Many features discussed below, notably arithmetic expressions and expressions in the field list, were not supported in GDAL/OGR 1.7.x and earlier. See RFC 28 for details of the new features in GDAL/OGR 1.8.0.
SELECT * FROM polylayer
In this case all features are fetched from the layer named "polylayer", and all attributes of those features are returned. This is essentially equivalent to accessing the layer directly. In this example the "*" is the list of fields to fetch from the layer, with "*" meaning that all fields should be fetched.
This slightly more sophisticated form still pulls all features from the layer but the schema will only contain the EAS_ID and PROP_VALUE attributes. Any other attributes would be discarded.
SELECT eas_id, prop_value FROM polylayer
A much more ambitious SELECT, restricting the features fetched with a WHERE clause, and sorting the results might look like:
SELECT * from polylayer WHERE prop_value > 220000.0 ORDER BY prop_value DESC
This select statement will produce a table with just one feature, with one attribute (named something like "count_eas_id") containing the number of distinct values of the eas_id attribute.
SELECT COUNT(DISTINCT eas_id) FROM polylayer
A special form of the field list uses the DISTINCT keyword. This returns a list of all the distinct values of the named attribute. When the DISTINCT keyword is used, only one attribute may appear in the field list. The DISTINCT keyword may be used against any type of field. Currently the distinctness test against a string value is case insensitive in OGR SQL. The result of a SELECT with a DISTINCT keyword is a layer with one column (named the same as the field operated on), and one feature per distinct value. Geometries are discarded. The distinct values are assembled in memory, so a lot of memory may be used for datasets with a large number of distinct values.
SELECT DISTINCT areacode FROM polylayer
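The case-insensitive distinctness test can be pictured with a small pure-Python model (an illustration of the semantics described above, not OGR code):

```python
def distinct_values(values):
    """Collect distinct values the way OGR SQL's DISTINCT does for
    strings: comparison is case insensitive, and the first spelling
    seen is the one reported."""
    seen = {}
    for v in values:
        key = v.lower()          # case-insensitive comparison key
        if key not in seen:
            seen[key] = v        # keep the first spelling encountered
    return list(seen.values())

rows = ["905", "416", "905", "N0N", "n0n"]
# "N0N" and "n0n" count as a single distinct value; the first form wins.
assert distinct_values(rows) == ["905", "416", "N0N"]
```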
There are also several summarization operators that may be applied to columns. When a summarization operator is applied to any field, then all fields must have summarization operators applied. The summarization operators are COUNT (a count of instances), AVG (numerical average), SUM (numerical sum), MIN (lexical or numerical minimum), and MAX (lexical or numerical maximum). This example produces a variety of summarization information on parcel property values:
SELECT MIN(prop_value), MAX(prop_value), AVG(prop_value), SUM(prop_value), COUNT(prop_value) FROM polylayer WHERE prov_name = "Ontario"
As a special case, the COUNT() operator can be given a "*" argument instead of a field name, which is shorthand for counting all the records; it gives the same result as passing it any of the column names. It is also possible to apply the COUNT() operator to a DISTINCT SELECT to get a count of distinct values, for instance:
SELECT COUNT(DISTINCT areacode) FROM polylayer
Field names can also be prefixed by a table name though this is only really meaningful when performing joins. It is further demonstrated in the JOIN section.
Field definitions can also be complex expressions using arithmetic, and functional operators. However, the DISTINCT keyword, and summarization operators like MIN, MAX, AVG and SUM may not be applied to expression fields.
SELECT cost+tax from invoice
or
SELECT CONCAT(owner_first_name, ' ',owner_last_name) from properties
SELECT *, OGR_STYLE AS STYLE FROM polylayer
The field name alias can be used as the last operation in the column specification. Therefore we cannot rename the fields inside an operator, but we can rename the whole column expression, as in these two examples:
SELECT COUNT(areacode) AS 'count' FROM polylayer
SELECT dollars/100.0 AS cents FROM polylayer
SELECT *, CAST(OGR_STYLE AS character(255)) FROM rivers
Currently casting to the following target types are supported:
Specifying the field_length and/or the field_precision is optional. An explicit value of zero can be used as the width for character() to indicate variable width. Conversion to the 'integer list', 'double list' and 'string list' OGR data types is not supported, which does not conform to the SQL92 specification.
While the CAST operator can be applied anywhere in an expression, including in a WHERE clause, the detailed control of output field format is only supported if the CAST operator is the outermost operator on a field in the field definition list. In other contexts it is still useful to convert between numeric, string and date data types.
In addition to the arithmetic and other functional operators available in expressions in the field selection clause of the SELECT statement, logical operators are also available in the WHERE context, where the expression must evaluate to a logical value (true or false).
The available logical operators are =, !=, <>, <, >, <=, >=, LIKE, ILIKE, BETWEEN and IN. Most of the operators are self-explanatory, but it is worth noting that != is the same as <>, that string equality is case insensitive, while the <, >, <= and >= operators are case sensitive. Both the LIKE and ILIKE operators are case insensitive.
The value argument to the LIKE operator is a pattern against which the value string is matched. In this pattern percent (%) matches any number of characters, and underscore ( _ ) matches any one character. An optional ESCAPE escape_char clause can be added so that the percent or underscore characters can be searched as regular characters, by being preceded with the escape_char.
String        Pattern    Matches?
------        -------    --------
Alberta       ALB%       Yes
Alberta       _lberta    Yes
St. Alberta   _lberta    No
St. Alberta   %lberta    Yes
Robarts St.   %Robarts%  Yes
12345         123%45     Yes
123.45        12?45      No
N0N 1P0       %N0N%      Yes
L4C 5E2       %N0N%      No
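The LIKE semantics above can be modelled by translating the pattern into a regular expression (a pure-Python illustration of the matching rules, not OGR's implementation; the ESCAPE handling is simplified):

```python
import re

def ogr_like(value, pattern, escape=None):
    """Case-insensitive LIKE: '%' matches any run of characters,
    '_' matches exactly one character, everything else is literal.
    A character preceded by the optional escape character is literal."""
    regex = []
    i = 0
    while i < len(pattern):
        c = pattern[i]
        if escape and c == escape and i + 1 < len(pattern):
            regex.append(re.escape(pattern[i + 1]))  # escaped: literal
            i += 2
            continue
        if c == '%':
            regex.append('.*')
        elif c == '_':
            regex.append('.')
        else:
            regex.append(re.escape(c))
        i += 1
    return re.fullmatch(''.join(regex), value, re.IGNORECASE) is not None

# The rows of the table above:
assert ogr_like('Alberta', 'ALB%')
assert ogr_like('Alberta', '_lberta')
assert not ogr_like('St. Alberta', '_lberta')
assert ogr_like('St. Alberta', '%lberta')
assert ogr_like('12345', '123%45')
assert not ogr_like('123.45', '12?45')   # '?' is an ordinary character
assert ogr_like('N0N 1P0', '%N0N%')
assert not ogr_like('L4C 5E2', '%N0N%')
```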
The IN operator takes a list of values as its argument and tests the attribute value for membership in the provided set.
Value      Value Set            Matches?
-----      ---------            --------
321        IN (456,123)         No
"Ontario"  IN ("Ontario","BC")  Yes
"Ont"      IN ("Ontario","BC")  No
1          IN (0,2,4,6)         No
The syntax of the BETWEEN operator is "field_name BETWEEN value1 AND value2" and it is equivalent to "field_name >= value1 AND field_name <= value2".
In addition to the above binary operators, there are additional operators for testing if a field is null or not. These are the IS NULL and IS NOT NULL operators.
Basic field tests can be combined in more complicated predicates using the logical operators AND, OR, and the unary logical NOT. Subexpressions should be bracketed to make precedence clear. Some more complicated predicates are:
SELECT * FROM poly WHERE (prop_value >= 100000) AND (prop_value < 200000)
SELECT * FROM poly WHERE NOT (area_code LIKE "N0N%")
SELECT * FROM poly WHERE (prop_value IS NOT NULL) AND (prop_value < 100000)
SELECT * FROM property WHERE class_code = 7 ORDER BY prop_value DESC
SELECT * FROM property ORDER BY prop_value
SELECT * FROM property ORDER BY prop_value ASC
SELECT DISTINCT zip_code FROM property ORDER BY zip_code
Note that ORDER BY clauses cause two passes through the feature set: one to build an in-memory table of field values and their corresponding feature ids, and a second pass to fetch the features by feature id in the sorted order. For formats which cannot efficiently read features randomly by feature id, this can be a very expensive operation.
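The two-pass strategy can be sketched as follows. This is a conceptual pure-Python model (the dict standing in for a layer that supports fetch-by-FID is hypothetical), not OGR's actual code:

```python
def order_by(layer, field, descending=False):
    """Two-pass ORDER BY: first build an in-memory (value, fid) table,
    then fetch features by FID in sorted order."""
    # Pass 1: scan all features, keeping only the sort key and the FID.
    keys = [(feat[field], fid) for fid, feat in layer.items()]
    keys.sort(reverse=descending)
    # Pass 2: random reads by FID -- this is the expensive part for
    # formats that cannot seek to a feature efficiently.
    return [layer[fid] for _, fid in keys]

layer = {1: {"prop_value": 50}, 2: {"prop_value": 300}, 3: {"prop_value": 120}}
result = order_by(layer, "prop_value", descending=True)
assert [f["prop_value"] for f in result] == [300, 120, 50]
```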
Sorting of string field values is case sensitive, not case insensitive like in most other parts of OGR SQL.
SELECT city.*, nation.name FROM city LEFT JOIN nation ON city.nation_id = nation.id
This query would result in a table with all the fields from the city table, and an additional "nation.name" field with the nation name pulled from the nation table by looking for the record in the nation table that has the "id" field with the same value as the city.nation_id field.
Joins introduce a number of additional issues. One is the concept of table qualifiers on field names. For instance, referring to city.nation_id instead of just nation_id to indicate the nation_id field from the city layer. The table name qualifiers may only be used in the field list, and within the ON clause of the join.
Wildcards are also somewhat more involved. All fields from the primary table (city in this case) and the secondary table (nation in this case) may be selected using the usual * wildcard. But the fields of just one of the primary or secondary tables may be selected by prefixing the asterisk with the table name.
The field names in the resulting query layer will be qualified by the table name if the table name is given as a qualifier in the field list. In addition, field names will be qualified with a table name if they would conflict with earlier fields. For instance, the following SELECT might result in a result set with name, nation_id, nation.nation_id and nation.name fields if the city and nation tables both have the nation_id and name field names.
SELECT * FROM city LEFT JOIN nation ON city.nation_id = nation.nation_id
On the other hand, if the nation table had a continent_id field, but the city table did not, then that field would not need to be qualified in the result set. However, if the SELECT instead looked like the following statement, all result fields would be qualified by the table name.
SELECT city.*, nation.* FROM city LEFT JOIN nation ON city.nation_id = nation.nation_id
In the above examples, the nation table was found in the same datasource as the city table. However, the OGR join support includes the ability to join against a table in a different data source, potentially of a different format. This is indicated by qualifying the secondary table name with a datasource name. In this case the secondary datasource is opened using normal OGR semantics and used to access the secondary table until the query result is no longer needed.
SELECT * FROM city LEFT JOIN '/usr2/data/nation.dbf'.nation ON city.nation_id = nation.nation_id
While not necessarily very useful, it is also possible to introduce table aliases to simplify some SELECT statements. This can also be useful to disambiguate situations where tables of the same name are being used from different data sources. For instance, if the actual table names were messy we might want to do something like:
SELECT c.name, n.name FROM project_615_city c LEFT JOIN '/usr2/data/project_615_nation.dbf'.project_615_nation n ON c.nation_id = n.nation_id
It is possible to do multiple joins in a single query.
SELECT city.name, prov.name, nation.name FROM city LEFT JOIN province ON city.prov_id = province.id LEFT JOIN nation ON city.nation_id = nation.id
SELECT FID, * FROM nation
SELECT * FROM nation WHERE OGR_GEOMETRY= 'POINT' OR OGR_GEOMETRY= 'POLYGON'
SELECT OGR_GEOM_WKT, * FROM nation
Using the OGR_GEOM_WKT and the LIKE operator in the WHERE clause we can get similar effect as using OGR_GEOMETRY:
SELECT OGR_GEOM_WKT, * FROM nation WHERE OGR_GEOM_WKT LIKE 'POINT%' OR OGR_GEOM_WKT LIKE 'POLYGON%'
The OGR_GEOM_AREA special field returns the area of the feature's geometry computed by the OGRSurface::get_Area() method. For OGRGeometryCollection and OGRMultiPolygon the value is the sum of the areas of its members. For non-surface geometries the returned area is 0.0.
For example, to select only polygon features larger than a given area:
SELECT * FROM nation WHERE OGR_GEOM_AREA > 10000000
SELECT * FROM nation WHERE OGR_STYLE LIKE 'LABEL%'
CREATE INDEX ON nation USING nation_id
DROP INDEX ON nation USING nation_id
DROP INDEX ON nation
OGRLayer * OGRDataSource::ExecuteSQL( const char *pszSQLCommand,
                                      OGRGeometry *poSpatialFilter,
                                      const char *pszDialect );
The pszDialect argument is in theory intended to allow for support of different command languages against a provider, but for now applications should always pass an empty (not NULL) string to get the default dialect.
The poSpatialFilter argument is a geometry used to select a bounding rectangle for features to be returned in a manner similar to the OGRLayer::SetSpatialFilter() method. It may be NULL for no special spatial restriction.
The result of an ExecuteSQL() call is usually a temporary OGRLayer representing the result set from the statement. This is the case for a SELECT statement, for instance. The returned temporary layer should be released with the OGRDataSource::ReleaseResultSet() method when no longer needed. Failure to release it before the datasource is destroyed may result in a crash.