OTB.TrainVectorClassifier: Train a classifier based on labeled geometries and a list of features to consider.

This application trains a classifier based on labeled geometries and a list of features to consider for classification. It is based on LibSVM, OpenCV Machine Learning (2.3.1 and later), and Shark ML. The output of this application is a text model file, whose format corresponds to the ML model type chosen. There is no image or vector data output.
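As an illustration, an execute request for this process can be built as a JSON document following OGC API - Processes. This is a minimal sketch: the execution endpoint is the one listed in this page, but the input href, field names, and class field below are hypothetical placeholders.

```python
import json

# Execution endpoint for this process (from this page).
EXECUTE_URL = ("http://tb17.geolabs.fr:8119/ogc-api/processes/"
               "OTB.TrainVectorClassifier/execution")

# Sketch of an execute request body; input values are hypothetical.
payload = {
    "inputs": {
        "io.vd": [{"href": "http://example.com/training.zip",
                   "type": "application/zip"}],
        "feat": ["b1", "b2", "b3"],   # feature field names (hypothetical)
        "cfield": "class",            # class-id field (hypothetical)
        "classifier": "rf",
        "classifier.rf.nbtrees": 100,
    },
    "outputs": {
        "io.out": {"transmissionMode": "reference"},
        "io.confmatout": {"transmissionMode": "reference"},
    },
    "response": "document",
}

body = json.dumps(payload)
# POST `body` to EXECUTE_URL with Content-Type: application/json;
# adding the header "Prefer: respond-async" requests async execution.
```

The process supports both sync-execute and async-execute job control, so the same payload works for either mode.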

Inputs

io.vd (href, required): Input geometries used for training (note: all geometries from the layer will be used).

io.stats (href, optional): XML file containing the mean and variance of each feature.

layer (integer, default 0): Index of the layer to use in the input vector file.

feat (string, required): List of field names in the input vector data to be used as features for training.

valid.vd (href, optional): Geometries used for validation (must contain the same fields used for training; all geometries from the layer will be used).

valid.layer (integer, default 0): Index of the layer to use in the validation vector file.

cfield (string, required): Field containing the class id for supervision. The values in this field are cast to integers; only geometries with this field available are taken into account.

v (boolean, default false): Verbose mode; display the contingency table result.
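Since the cfield values "shall be cast into integers", it can be worth checking the candidate class-id field before launching a training job. A small pure-Python sketch of that rule (this helper is illustrative, not part of the OTB application):

```python
def check_cfield(values):
    """Cast the values of a candidate class-id field to int, raising
    ValueError on the first value that cannot be cast. Illustrates the
    'values shall be cast into integers' constraint on cfield."""
    cast = []
    for v in values:
        try:
            cast.append(int(v))
        except (TypeError, ValueError):
            raise ValueError(f"cfield value {v!r} is not castable to int")
    return cast

print(check_cfield(["1", "2", 3]))  # -> [1, 2, 3]
```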

classifier (string, default libsvm): Choice of the classifier to use for the training (libsvm, boost, dt, ann, bayes, rf, knn).

classifier.libsvm.k (string, default linear): SVM kernel type (linear, rbf, poly, sigmoid).

classifier.libsvm.m (string, default csvc): Type of SVM formulation (csvc, nusvc, oneclass).

classifier.libsvm.c (number, default 1): Cost parameter C, controlling the trade-off between training errors and rigid margins.

classifier.libsvm.nu (number, default 0.5): Cost parameter Nu, in the range 0..1; the larger the value, the smoother the decision boundary.

classifier.libsvm.opt (boolean, default false): SVM parameter optimization flag.

classifier.libsvm.prob (boolean, default false): Probability estimation flag.

classifier.boost.t (string, default real): Type of boosting algorithm (discrete, real, logit, gentle).

classifier.boost.w (integer, default 100): Number of weak classifiers.

classifier.boost.r (number, default 0.95): Weight trim rate, a threshold between 0 and 1 used to save computation time. Samples with summary weight <= (1 - weight_trim_rate) do not participate in the next training iteration. Set to 0 to turn off this functionality.

classifier.boost.m (integer, default 1): Maximum depth of the tree.

classifier.dt.max (integer, default 10): Maximum depth of the tree. The training algorithm attempts to split each node while its depth is smaller than this maximum; the actual depth may be smaller if other termination criteria are met and/or the tree is pruned.

classifier.dt.min (integer, default 10): Minimum number of samples in a node; a node with fewer samples is not split.

classifier.dt.ra (number, default 0.01): Regression accuracy: a node is not split further if all absolute differences between the value estimated in the node and the values of the training samples in it are smaller than this threshold.

classifier.dt.cat (integer, default 10): Cluster possible values of a categorical variable into K <= cat clusters to find a suboptimal split.

classifier.dt.f (integer, default 0): If cv_folds > 1, the tree is pruned with K-fold cross-validation, where K equals cv_folds.

classifier.dt.r (boolean, default false): If true, pruning is harsher. This makes the tree more compact and more resistant to training-data noise, but slightly less accurate.

classifier.dt.t (boolean, default false): If true, pruned branches are physically removed from the tree.
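The two node-splitting controls above (the minimum sample count and the regression accuracy) amount to two stopping rules. A small illustrative sketch, using the node mean as the estimate; this is not the actual OTB/OpenCV implementation:

```python
def should_split(sample_values, min_samples=10, regression_accuracy=0.01):
    """Sketch of two decision-tree stopping rules: a node is not split
    when it holds fewer than `min_samples` samples (classifier.dt.min),
    or when every sample value lies within `regression_accuracy` of the
    node's estimate (classifier.dt.ra)."""
    if len(sample_values) < min_samples:
        return False
    estimate = sum(sample_values) / len(sample_values)  # node estimate: mean
    if all(abs(v - estimate) < regression_accuracy for v in sample_values):
        return False
    return True

print(should_split([0.0] * 20))       # all values equal -> False
print(should_split(list(range(20))))  # spread-out values -> True
```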

classifier.ann.t (string, default reg): Training method for the multilayer perceptron (MLP) neural network (back, reg).

classifier.ann.sizes (string, required): Number of neurons in each intermediate layer (excluding the input and output layers).

classifier.ann.f (string, default sig): Neuron activation function (ident, sig, gau); it determines whether the output of a node is positive or not, depending on the output of the transfer function.

classifier.ann.a (number, default 1): Alpha parameter of the activation function (used only with the sigmoid and Gaussian functions).

classifier.ann.b (number, default 1): Beta parameter of the activation function (used only with the sigmoid and Gaussian functions).

classifier.ann.bpdw (number, default 0.1): Strength of the weight gradient term in the BACKPROP method; a value of about 0.1 is recommended.

classifier.ann.bpms (number, default 0.1): Strength of the momentum term (the difference between the weights on the two previous iterations). This parameter provides some inertia to smooth random fluctuations of the weights; it can vary from 0 (disabled) to 1 and beyond, and a value around 0.1 is usually sufficient.

classifier.ann.rdw (number, default 0.1): Initial value Delta_0 of the update-values.

classifier.ann.rdwm (number, default 1e-07): Lower limit of the update-values.

classifier.ann.term (string, default all): Termination criteria (iter, eps, all).

classifier.ann.eps (number, default 0.01): Epsilon value used in the termination criteria.

classifier.ann.iter (integer, default 1000): Maximum number of iterations used in the termination criteria.

classifier.rf.max (integer, default 5): Depth of the trees. A low value will likely underfit; a high value will likely overfit. The optimal value can be obtained by cross-validation or other suitable methods.

classifier.rf.min (integer, default 10): Minimum number of samples in a node; a node with fewer samples is not split. A reasonable value is a small percentage of the total data, e.g. 1 percent.

classifier.rf.ra (number, default 0): Regression accuracy: a node is not split if all absolute differences between the value estimated in the node and the values of the training samples in it are smaller than this threshold.

classifier.rf.cat (integer, default 10): Cluster possible values of a categorical variable into K <= cat clusters to find a suboptimal split.

classifier.rf.var (integer, default 0): Size of the subset of features, randomly selected at each tree node, used to find the best split(s). If set to 0, the size is the square root of the total number of features.

classifier.rf.nbtrees (integer, default 100): Maximum number of trees in the forest. Typically, more trees give better accuracy, but the improvement diminishes and reaches an asymptote beyond a certain number of trees, while prediction time grows linearly with the number of trees.

classifier.rf.acc (number, default 0.01): Sufficient accuracy (OOB error).
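The classifier.rf.var rule ("0 means the square root of the total number of features") can be sketched as follows; the exact rounding used by the underlying OpenCV implementation may differ, so this is an approximation:

```python
import math

def active_var_count(rf_var, n_features):
    """Size of the random feature subset tested at each node, per the
    classifier.rf.var rule above: 0 means 'square root of the total
    number of features' (integer rounding assumed here)."""
    if rf_var == 0:
        return max(1, round(math.sqrt(n_features)))
    return rf_var

print(active_var_count(0, 16))  # -> 4
print(active_var_count(5, 16))  # -> 5
```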

classifier.knn.k (integer, default 32): Number of neighbors to use.

rand (integer, optional): Set a specific random seed.

Outputs

io.out (text/xml or text/plain): Output file containing the estimated model (.txt format).

io.confmatout (text/csv): Output file containing the confusion matrix or contingency table (.csv format). The contingency table is output when an unsupervised algorithm is used; otherwise the confusion matrix is output.
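A common follow-up is to compute the overall accuracy from the io.confmatout CSV. The exact header layout of that file is OTB-specific, so this sketch assumes a square matrix of plain counts, with any '#'-prefixed label lines skipped:

```python
import csv
import io

def overall_accuracy(csv_text):
    """Overall accuracy (trace / total) of a square confusion matrix
    given as CSV counts. Lines starting with '#' are treated as label
    comments and skipped; this layout is an assumption, not taken from
    the OTB specification."""
    rows = [
        [int(cell) for cell in row]
        for row in csv.reader(io.StringIO(csv_text))
        if row and not row[0].startswith("#")
    ]
    total = sum(sum(r) for r in rows)
    diagonal = sum(rows[i][i] for i in range(len(rows)))
    return diagonal / total

matrix = ("#Reference labels (rows):1,2\n"
          "#Produced labels (columns):1,2\n"
          "8,2\n"
          "1,9\n")
print(overall_accuracy(matrix))  # -> 0.85
```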

Execution options

The execute request may additionally carry subscriber URIs (successUri, inProgressUri, failedUri), a preferred response format, and the execution mode (sync-execute or async-execute; dismiss is also supported).

Execute endpoint: http://tb17.geolabs.fr:8119/ogc-api/processes/OTB.TrainVectorClassifier/execution (application/json; an alternative HTML version is available at execution.html).

{"id": "OTB.TrainVectorClassifier", "title": "Train a classifier based on labeled geometries and a list of features to consider.", "description": "This application trains a classifier based on labeled geometries and a list of features to consider for classification.This application is based on LibSVM, OpenCV Machine Learning (2.3.1 and later), and Shark ML The output of this application is a text model file, whose format corresponds to the ML model type chosen. There is no image nor vector data output.", "version": "1.0.0", "jobControlOptions": ["sync-execute", "async-execute", "dismiss"], "outputTransmission": ["value", "reference"], "links": [{"rel": "http://www.opengis.net/def/rel/ogc/1.0/execute", "type": "application/json", "title": "Execute End Point", "href": "http://tb17.geolabs.fr:8119/ogc-api/processes/OTB.TrainVectorClassifier/execution"}, {"rel": "alternate", "type": "text/html", "title": "Execute End Point", "href": "http://tb17.geolabs.fr:8119/ogc-api/processes/OTB.TrainVectorClassifier/execution.html"}], "inputs": {"io.vd": {"title": "Input geometries used for training (note : all geometries from the layer will be used)", "description": "Input geometries used for training (note : all geometries from the layer will be used)", "maxOccurs": 1024, "extended-schema": {"type": "array", "minItems": 1, "maxItems": 1024, "items": {"oneOf": [{"allOf": [{"$ref": "http://zoo-project.org/dl/link.json"}, {"type": "object", "properties": {"type": {"enum": ["text/xml", "application/vnd.google-earth.kml+xml", "application/json", "application/zip"]}}}]}, {"type": "object", "required": ["value"], "properties": {"value": {"oneOf": [{"type": "string", "contentEncoding": "utf-8", "contentMediaType": "text/xml"}, {"type": "string", "contentEncoding": "utf-8", "contentMediaType": "application/vnd.google-earth.kml+xml"}, {"type": "object"}, {"type": "string", "contentEncoding": "base64", "contentMediaType": "application/zip"}]}}}]}}, "schema": {"oneOf": [{"type": "string", 
"contentEncoding": "utf-8", "contentMediaType": "text/xml"}, {"type": "string", "contentEncoding": "utf-8", "contentMediaType": "application/vnd.google-earth.kml+xml"}, {"type": "object"}, {"type": "string", "contentEncoding": "base64", "contentMediaType": "application/zip"}]}, "id": "io.vd"}, "io.stats": {"title": "XML file containing mean and variance of each feature.", "description": "XML file containing mean and variance of each feature.", "extended-schema": {"oneOf": [{"allOf": [{"$ref": "http://zoo-project.org/dl/link.json"}, {"type": "object", "properties": {"type": {"enum": ["text/xml"]}}}]}, {"type": "object", "required": ["value"], "properties": {"value": {"oneOf": [{"type": "string", "contentEncoding": "utf-8", "contentMediaType": "text/xml"}]}}}], "nullable": true}, "schema": {"oneOf": [{"type": "string", "contentEncoding": "utf-8", "contentMediaType": "text/xml"}]}, "id": "io.stats"}, "layer": {"title": "Index of the layer to use in the input vector file.", "description": "Index of the layer to use in the input vector file.", "schema": {"type": "integer", "default": 0, "nullable": true}, "id": "layer"}, "feat": {"title": "List of field names in the input vector data to be used as features for training.", "description": "List of field names in the input vector data to be used as features for training.", "maxOccurs": 1024, "schema": {"type": "string"}, "id": "feat"}, "valid.vd": {"title": "Geometries used for validation (must contain the same fields used for training, all geometries from the layer will be used)", "description": "Geometries used for validation (must contain the same fields used for training, all geometries from the layer will be used)", "maxOccurs": 1024, "extended-schema": {"type": "array", "minItems": 0, "maxItems": 1024, "items": {"oneOf": [{"allOf": [{"$ref": "http://zoo-project.org/dl/link.json"}, {"type": "object", "properties": {"type": {"enum": ["text/xml", "application/vnd.google-earth.kml+xml", "application/json", 
"application/zip"]}}}]}, {"type": "object", "required": ["value"], "properties": {"value": {"oneOf": [{"type": "string", "contentEncoding": "utf-8", "contentMediaType": "text/xml"}, {"type": "string", "contentEncoding": "utf-8", "contentMediaType": "application/vnd.google-earth.kml+xml"}, {"type": "object"}, {"type": "string", "contentEncoding": "base64", "contentMediaType": "application/zip"}]}}}]}, "nullable": true}, "schema": {"oneOf": [{"type": "string", "contentEncoding": "utf-8", "contentMediaType": "text/xml"}, {"type": "string", "contentEncoding": "utf-8", "contentMediaType": "application/vnd.google-earth.kml+xml"}, {"type": "object"}, {"type": "string", "contentEncoding": "base64", "contentMediaType": "application/zip"}]}, "id": "valid.vd"}, "valid.layer": {"title": "Index of the layer to use in the validation vector file.", "description": "Index of the layer to use in the validation vector file.", "schema": {"type": "integer", "default": 0, "nullable": true}, "id": "valid.layer"}, "cfield": {"title": "Field containing the class id for supervision. The values in this field shall be cast into integers. Only geometries with this field available will be taken into account.", "description": "Field containing the class id for supervision. The values in this field shall be cast into integers. 
Only geometries with this field available will be taken into account.", "maxOccurs": 1024, "schema": {"type": "string"}, "id": "cfield"}, "v": {"title": "Verbose mode, display the contingency table result.", "description": "Verbose mode, display the contingency table result.", "schema": {"type": "boolean", "default": false}, "id": "v"}, "classifier": {"title": "Choice of the classifier to use for the training.", "description": "Choice of the classifier to use for the training.", "schema": {"type": "string", "default": "libsvm", "enum": ["libsvm", "boost", "dt", "ann", "bayes", "rf", "knn"]}, "id": "classifier"}, "classifier.libsvm.k": {"title": "SVM Kernel Type.", "description": "SVM Kernel Type.", "schema": {"type": "string", "default": "linear", "enum": ["linear", "rbf", "poly", "sigmoid"]}, "id": "classifier.libsvm.k"}, "classifier.libsvm.m": {"title": "Type of SVM formulation.", "description": "Type of SVM formulation.", "schema": {"type": "string", "default": "csvc", "enum": ["csvc", "nusvc", "oneclass"]}, "id": "classifier.libsvm.m"}, "classifier.libsvm.c": {"title": "SVM models have a cost parameter C (1 by default) to control the trade-off between training errors and forcing rigid margins.", "description": "SVM models have a cost parameter C (1 by default) to control the trade-off between training errors and forcing rigid margins.", "schema": {"type": "number", "default": 1, "format": "double"}, "id": "classifier.libsvm.c"}, "classifier.libsvm.nu": {"title": "Cost parameter Nu, in the range 0..1, the larger the value, the smoother the decision.", "description": "Cost parameter Nu, in the range 0..1, the larger the value, the smoother the decision.", "schema": {"type": "number", "default": 0.5, "format": "double"}, "id": "classifier.libsvm.nu"}, "classifier.libsvm.opt": {"title": "SVM parameters optimization flag.", "description": "SVM parameters optimization flag.", "schema": {"type": "boolean", "default": false}, "id": "classifier.libsvm.opt"}, 
"classifier.libsvm.prob": {"title": "Probability estimation flag.", "description": "Probability estimation flag.", "schema": {"type": "boolean", "default": false}, "id": "classifier.libsvm.prob"}, "classifier.boost.t": {"title": "Type of Boosting algorithm.", "description": "Type of Boosting algorithm.", "schema": {"type": "string", "default": "real", "enum": ["discrete", "real", "logit", "gentle"]}, "id": "classifier.boost.t"}, "classifier.boost.w": {"title": "The number of weak classifiers.", "description": "The number of weak classifiers.", "schema": {"type": "integer", "default": 100}, "id": "classifier.boost.w"}, "classifier.boost.r": {"title": "A threshold between 0 and 1 used to save computational time. Samples with summary weight <= (1 - weight_trim_rate) do not participate in the next iteration of training. Set this parameter to 0 to turn off this functionality.", "description": "A threshold between 0 and 1 used to save computational time. Samples with summary weight <= (1 - weight_trim_rate) do not participate in the next iteration of training. Set this parameter to 0 to turn off this functionality.", "schema": {"type": "number", "default": 0.95, "format": "double"}, "id": "classifier.boost.r"}, "classifier.boost.m": {"title": "Maximum depth of the tree.", "description": "Maximum depth of the tree.", "schema": {"type": "integer", "default": 1}, "id": "classifier.boost.m"}, "classifier.dt.max": {"title": "The training algorithm attempts to split each node while its depth is smaller than the maximum possible depth of the tree. The actual depth may be smaller if the other termination criteria are met, and/or if the tree is pruned.", "description": "The training algorithm attempts to split each node while its depth is smaller than the maximum possible depth of the tree. 
The actual depth may be smaller if the other termination criteria are met, and/or if the tree is pruned.", "schema": {"type": "integer", "default": 10}, "id": "classifier.dt.max"}, "classifier.dt.min": {"title": "If the number of samples in a node is smaller than this parameter, then this node will not be split.", "description": "If the number of samples in a node is smaller than this parameter, then this node will not be split.", "schema": {"type": "integer", "default": 10}, "id": "classifier.dt.min"}, "classifier.dt.ra": {"title": "If all absolute differences between an estimated value in a node and the values of the train samples in this node are smaller than this regression accuracy parameter, then the node will not be split further.", "description": "If all absolute differences between an estimated value in a node and the values of the train samples in this node are smaller than this regression accuracy parameter, then the node will not be split further.", "schema": {"type": "number", "default": 0.01, "format": "double"}, "id": "classifier.dt.ra"}, "classifier.dt.cat": {"title": "Cluster possible values of a categorical variable into K <= cat clusters to find a suboptimal split.", "description": "Cluster possible values of a categorical variable into K <= cat clusters to find a suboptimal split.", "schema": {"type": "integer", "default": 10}, "id": "classifier.dt.cat"}, "classifier.dt.f": {"title": "If cv_folds > 1, then it prunes a tree with K-fold cross-validation where K is equal to cv_folds.", "description": "If cv_folds > 1, then it prunes a tree with K-fold cross-validation where K is equal to cv_folds.", "schema": {"type": "integer", "default": 0}, "id": "classifier.dt.f"}, "classifier.dt.r": {"title": "If true, then a pruning will be harsher. This will make a tree more compact and more resistant to the training data noise but a bit less accurate.", "description": "If true, then a pruning will be harsher. 
This will make a tree more compact and more resistant to the training data noise but a bit less accurate.", "schema": {"type": "boolean", "default": false}, "id": "classifier.dt.r"}, "classifier.dt.t": {"title": "If true, then pruned branches are physically removed from the tree.", "description": "If true, then pruned branches are physically removed from the tree.", "schema": {"type": "boolean", "default": false}, "id": "classifier.dt.t"}, "classifier.ann.t": {"title": "Type of training method for the multilayer perceptron (MLP) neural network.", "description": "Type of training method for the multilayer perceptron (MLP) neural network.", "schema": {"type": "string", "default": "reg", "enum": ["back", "reg"]}, "id": "classifier.ann.t"}, "classifier.ann.sizes": {"title": "The number of neurons in each intermediate layer (excluding input and output layers).", "description": "The number of neurons in each intermediate layer (excluding input and output layers).", "maxOccurs": 1024, "schema": {"type": "string"}, "id": "classifier.ann.sizes"}, "classifier.ann.f": {"title": "This function determine whether the output of the node is positive or not depending on the output of the transfert function.", "description": "This function determine whether the output of the node is positive or not depending on the output of the transfert function.", "schema": {"type": "string", "default": "sig", "enum": ["ident", "sig", "gau"]}, "id": "classifier.ann.f"}, "classifier.ann.a": {"title": "Alpha parameter of the activation function (used only with sigmoid and gaussian functions).", "description": "Alpha parameter of the activation function (used only with sigmoid and gaussian functions).", "schema": {"type": "number", "default": 1, "format": "double"}, "id": "classifier.ann.a"}, "classifier.ann.b": {"title": "Beta parameter of the activation function (used only with sigmoid and gaussian functions).", "description": "Beta parameter of the activation function (used only with sigmoid and 
gaussian functions).", "schema": {"type": "number", "default": 1, "format": "double"}, "id": "classifier.ann.b"}, "classifier.ann.bpdw": {"title": "Strength of the weight gradient term in the BACKPROP method. The recommended value is about 0.1.", "description": "Strength of the weight gradient term in the BACKPROP method. The recommended value is about 0.1.", "schema": {"type": "number", "default": 0.1, "format": "double"}, "id": "classifier.ann.bpdw"}, "classifier.ann.bpms": {"title": "Strength of the momentum term (the difference between weights on the 2 previous iterations). This parameter provides some inertia to smooth the random fluctuations of the weights. It can vary from 0 (the feature is disabled) to 1 and beyond. The value 0.1 or so is good enough.", "description": "Strength of the momentum term (the difference between weights on the 2 previous iterations). This parameter provides some inertia to smooth the random fluctuations of the weights. It can vary from 0 (the feature is disabled) to 1 and beyond. 
The value 0.1 or so is good enough.", "schema": {"type": "number", "default": 0.1, "format": "double"}, "id": "classifier.ann.bpms"}, "classifier.ann.rdw": {"title": "Initial value Delta_0 of update-values Delta_", "description": "Initial value Delta_0 of update-values Delta_", "schema": {"type": "number", "default": 0.1, "format": "double"}, "id": "classifier.ann.rdw"}, "classifier.ann.rdwm": {"title": "Update-values lower limit Delta_", "description": "Update-values lower limit Delta_", "schema": {"type": "number", "default": 1e-07, "format": "double"}, "id": "classifier.ann.rdwm"}, "classifier.ann.term": {"title": "Termination criteria.", "description": "Termination criteria.", "schema": {"type": "string", "default": "all", "enum": ["iter", "eps", "all"]}, "id": "classifier.ann.term"}, "classifier.ann.eps": {"title": "Epsilon value used in the Termination criteria.", "description": "Epsilon value used in the Termination criteria.", "schema": {"type": "number", "default": 0.01, "format": "double"}, "id": "classifier.ann.eps"}, "classifier.ann.iter": {"title": "Maximum number of iterations used in the Termination criteria.", "description": "Maximum number of iterations used in the Termination criteria.", "schema": {"type": "integer", "default": 1000}, "id": "classifier.ann.iter"}, "classifier.rf.max": {"title": "The depth of the tree. A low value will likely underfit and conversely a high value will likely overfit. The optimal value can be obtained using cross validation or other suitable methods.", "description": "The depth of the tree. A low value will likely underfit and conversely a high value will likely overfit. The optimal value can be obtained using cross validation or other suitable methods.", "schema": {"type": "integer", "default": 5}, "id": "classifier.rf.max"}, "classifier.rf.min": {"title": "If the number of samples in a node is smaller than this parameter, then the node will not be split. 
A reasonable value is a small percentage of the total data e.g. 1 percent.", "description": "If the number of samples in a node is smaller than this parameter, then the node will not be split. A reasonable value is a small percentage of the total data e.g. 1 percent.", "schema": {"type": "integer", "default": 10}, "id": "classifier.rf.min"}, "classifier.rf.ra": {"title": "If all absolute differences between an estimated value in a node and the values of the train samples in this node are smaller than this regression accuracy parameter, then the node will not be split.", "description": "If all absolute differences between an estimated value in a node and the values of the train samples in this node are smaller than this regression accuracy parameter, then the node will not be split.", "schema": {"type": "number", "default": 0, "format": "double"}, "id": "classifier.rf.ra"}, "classifier.rf.cat": {"title": "Cluster possible values of a categorical variable into K <= cat clusters to find a suboptimal split.", "description": "Cluster possible values of a categorical variable into K <= cat clusters to find a suboptimal split.", "schema": {"type": "integer", "default": 10}, "id": "classifier.rf.cat"}, "classifier.rf.var": {"title": "The size of the subset of features, randomly selected at each tree node, that are used to find the best split(s). If you set it to 0, then the size will be set to the square root of the total number of features.", "description": "The size of the subset of features, randomly selected at each tree node, that are used to find the best split(s). If you set it to 0, then the size will be set to the square root of the total number of features.", "schema": {"type": "integer", "default": 0}, "id": "classifier.rf.var"}, "classifier.rf.nbtrees": {"title": "The maximum number of trees in the forest. Typically, the more trees you have, the better the accuracy. 
However, the improvement in accuracy generally diminishes and reaches an asymptote for a certain number of trees. Also to keep in mind, increasing the number of trees increases the prediction time linearly.", "description": "The maximum number of trees in the forest. Typically, the more trees you have, the better the accuracy. However, the improvement in accuracy generally diminishes and reaches an asymptote for a certain number of trees. Also to keep in mind, increasing the number of trees increases the prediction time linearly.", "schema": {"type": "integer", "default": 100}, "id": "classifier.rf.nbtrees"}, "classifier.rf.acc": {"title": "Sufficient accuracy (OOB error).", "description": "Sufficient accuracy (OOB error).", "schema": {"type": "number", "default": 0.01, "format": "double"}, "id": "classifier.rf.acc"}, "classifier.knn.k": {"title": "The number of neighbors to use.", "description": "The number of neighbors to use.", "schema": {"type": "integer", "default": 32}, "id": "classifier.knn.k"}, "rand": {"title": "Set specific seed. with integer value.", "description": "Set specific seed. 
with integer value.", "schema": {"type": "integer", "nullable": true}, "id": "rand"}}, "outputs": {"io.out": {"title": "Output file containing the model estimated (.txt format).", "description": "Output file containing the model estimated (.txt format).", "extended-schema": {"oneOf": [{"allOf": [{"$ref": "http://zoo-project.org/dl/link.json"}, {"type": "object", "properties": {"type": {"enum": ["text/xml", "text/plain"]}}}]}, {"type": "object", "required": ["value"], "properties": {"value": {"oneOf": [{"type": "string", "contentEncoding": "utf-8", "contentMediaType": "text/xml"}, {"type": "string", "contentEncoding": "utf-8", "contentMediaType": "text/plain"}]}}}]}, "schema": {"oneOf": [{"type": "string", "contentEncoding": "utf-8", "contentMediaType": "text/xml"}, {"type": "string", "contentEncoding": "utf-8", "contentMediaType": "text/plain"}]}, "id": "io.out"}, "io.confmatout": {"title": "Output file containing the confusion matrix or contingency table (.csv format).The contingency table is output when we unsupervised algorithms is used otherwise the confusion matrix is output.", "description": "Output file containing the confusion matrix or contingency table (.csv format).The contingency table is output when we unsupervised algorithms is used otherwise the confusion matrix is output.", "extended-schema": {"oneOf": [{"allOf": [{"$ref": "http://zoo-project.org/dl/link.json"}, {"type": "object", "properties": {"type": {"enum": ["text/csv"]}}}]}, {"type": "object", "required": ["value"], "properties": {"value": {"oneOf": [{"type": "string", "contentEncoding": "utf-8", "contentMediaType": "text/csv"}]}}}]}, "schema": {"oneOf": [{"type": "string", "contentEncoding": "utf-8", "contentMediaType": "text/csv"}]}, "id": "io.confmatout"}}}

http://tb17.geolabs.fr:8119/ogc-api/processes/OTB.TrainVectorClassifier.html
Last modified: Sat Feb 19 15:43:34 CET 2022