public final class QuantileDiscretizer extends Estimator<Bucketizer> implements QuantileDiscretizerBase, DefaultParamsWritable
QuantileDiscretizer takes a column with continuous features and outputs a column with binned categorical features. The number of bins can be set using the numBuckets parameter. It is possible that the number of buckets used will be smaller than this value, for example if there are too few distinct values of the input to create enough distinct quantiles.

Since 2.3.0, QuantileDiscretizer can map multiple columns at once by setting the inputCols parameter. If both the inputCol and inputCols parameters are set, an Exception will be thrown. To specify the number of buckets for each column, the numBucketsArray parameter can be set; if the number of buckets should be the same across columns, numBuckets can be set as a convenience.
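To build intuition for what fitting computes, here is a minimal, Spark-free Java sketch that derives the numBuckets - 1 interior split points from the exact quantiles of a small sample. This is illustrative only: Spark itself uses an approximate quantile algorithm, and the class and method names below are made up for the sketch.

```java
import java.util.Arrays;

public class QuantileSplits {
    /**
     * Exact quantile-based split points for numBuckets buckets
     * (hand-rolled illustration, not Spark's approximate algorithm).
     */
    static double[] splits(double[] values, int numBuckets) {
        double[] sorted = values.clone();
        Arrays.sort(sorted);
        double[] result = new double[numBuckets - 1];
        for (int i = 1; i < numBuckets; i++) {
            // take the (i / numBuckets) quantile as the i-th interior split
            int idx = (int) Math.ceil((double) i * sorted.length / numBuckets) - 1;
            result[i - 1] = sorted[idx];
        }
        return result;
    }

    public static void main(String[] args) {
        double[] hours = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11};
        // 4 buckets over 12 evenly spread values -> splits at 2.0, 5.0, 8.0
        System.out.println(Arrays.toString(splits(hours, 4)));
    }
}
```

With duplicated input values, several of these quantiles can coincide, which is exactly how the fitted model can end up with fewer buckets than requested.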
NaN handling: null and NaN values in the input column are ignored during QuantileDiscretizer fitting. Fitting produces a Bucketizer model for making predictions. During the transformation, Bucketizer will raise an error when it finds NaN values in the dataset, but the user can also choose to either keep or remove NaN values within the dataset by setting handleInvalid. If the user chooses to keep NaN values, they will be handled specially and placed into their own bucket; for example, if 4 buckets are used, then non-NaN data will be put into buckets[0-3], but NaNs will be counted in a special bucket[4].
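The keep-NaN rule above can be pictured with a small hand-rolled Java sketch (this is not Spark's Bucketizer implementation; the names and linear scan are assumptions made for illustration):

```java
public class NanBucketing {
    /**
     * Assigns a bucket index given fitted splits (including the -Inf/+Inf ends).
     * NaN goes to an extra bucket with index numBuckets, mirroring
     * handleInvalid = "keep"; otherwise it is treated as an error.
     */
    static int bucketFor(double value, double[] splits, boolean keepInvalid) {
        int numBuckets = splits.length - 1; // splits bound numBuckets intervals
        if (Double.isNaN(value)) {
            if (!keepInvalid) {
                throw new IllegalArgumentException("NaN seen with handleInvalid = error");
            }
            return numBuckets; // the special extra bucket after the regular ones
        }
        int bucket = 0;
        while (bucket < numBuckets - 1 && value >= splits[bucket + 1]) {
            bucket++;
        }
        return bucket;
    }

    public static void main(String[] args) {
        double[] splits = {Double.NEGATIVE_INFINITY, 0.0, 10.0, 20.0, Double.POSITIVE_INFINITY};
        System.out.println(bucketFor(-5.0, splits, true));       // bucket 0
        System.out.println(bucketFor(15.0, splits, true));       // bucket 2
        System.out.println(bucketFor(Double.NaN, splits, true)); // bucket 4, the NaN bucket
    }
}
```

With 4 regular buckets, non-NaN values land in buckets 0 through 3 and NaNs are counted in bucket 4, matching the description above.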
Algorithm: The bin ranges are chosen using an approximate algorithm (see the documentation for org.apache.spark.sql.DataFrameStatFunctions.approxQuantile for a detailed description). The precision of the approximation can be controlled with the relativeError parameter. The lower and upper bin bounds will be -Infinity and +Infinity, covering all real values.
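Because the outermost bounds are -Infinity and +Infinity, every non-NaN real value falls into exactly one bin. A hedged sketch of such a lookup using binary search over the fitted splits (the split values are made up; Spark's actual Bucketizer may differ in implementation detail):

```java
import java.util.Arrays;

public class SplitLookup {
    /**
     * Bucket lookup over sorted splits that start at -Inf and end at +Inf.
     * A value equal to split i starts bucket i (clamped at the top bound).
     */
    static int bucket(double value, double[] splits) {
        int idx = Arrays.binarySearch(splits, value);
        if (idx >= 0) {
            return Math.min(idx, splits.length - 2); // exact hit on a boundary
        }
        int insertion = -idx - 1; // index of first split strictly greater than value
        return insertion - 1;     // bucket to the left of that split
    }

    public static void main(String[] args) {
        double[] splits = {Double.NEGATIVE_INFINITY, -1.5, 0.0, 2.5, Double.POSITIVE_INFINITY};
        System.out.println(bucket(-100.0, splits)); // 0: anything below -1.5
        System.out.println(bucket(0.0, splits));    // 2: a boundary opens the upper bucket
        System.out.println(bucket(1e9, splits));    // 3: the +Infinity bound covers all large values
    }
}
```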
| Constructor and Description |
| --- |
| QuantileDiscretizer() |
| QuantileDiscretizer(String uid) |
| Modifier and Type | Method and Description |
| --- | --- |
| QuantileDiscretizer | copy(ParamMap extra): Creates a copy of this instance with the same UID and some extra params. |
| Bucketizer | fit(Dataset&lt;?&gt; dataset): Fits a model to the input data. |
| Param&lt;String&gt; | handleInvalid(): Param for how to handle invalid entries. |
| Param&lt;String&gt; | inputCol(): Param for input column name. |
| StringArrayParam | inputCols(): Param for input column names. |
| static QuantileDiscretizer | load(String path) |
| IntParam | numBuckets(): Number of buckets (quantiles, or categories) into which data points are grouped. |
| IntArrayParam | numBucketsArray(): Array of number of buckets (quantiles, or categories) into which data points are grouped. |
| static void | org$apache$spark$internal$Logging$$log__$eq(org.slf4j.Logger x$1) |
| static org.slf4j.Logger | org$apache$spark$internal$Logging$$log_() |
| Param&lt;String&gt; | outputCol(): Param for output column name. |
| StringArrayParam | outputCols(): Param for output column names. |
| static MLReader&lt;T&gt; | read() |
| DoubleParam | relativeError(): Relative error (see documentation for org.apache.spark.sql.DataFrameStatFunctions.approxQuantile for description). Must be in the range [0, 1]. |
| QuantileDiscretizer | setHandleInvalid(String value) |
| QuantileDiscretizer | setInputCol(String value) |
| QuantileDiscretizer | setInputCols(String[] value) |
| QuantileDiscretizer | setNumBuckets(int value) |
| QuantileDiscretizer | setNumBucketsArray(int[] value) |
| QuantileDiscretizer | setOutputCol(String value) |
| QuantileDiscretizer | setOutputCols(String[] value) |
| QuantileDiscretizer | setRelativeError(double value) |
| StructType | transformSchema(StructType schema): :: DeveloperApi :: |
| String | uid(): An immutable unique ID for the object and its derivatives. |
Methods inherited from class java.lang.Object: equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface QuantileDiscretizerBase: getNumBuckets, getNumBucketsArray, getRelativeError

Methods inherited from interface HasHandleInvalid: getHandleInvalid

Methods inherited from interface HasInputCol: getInputCol

Methods inherited from interface HasOutputCol: getOutputCol

Methods inherited from interface HasInputCols: getInputCols

Methods inherited from interface HasOutputCols: getOutputCols

Methods inherited from interface Params: clear, copyValues, defaultCopy, defaultParamMap, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn

Methods inherited from interface Identifiable: toString

Methods inherited from interface DefaultParamsWritable: write

Methods inherited from interface MLWritable: save

Methods inherited from interface Logging: initializeLogging, initializeLogIfNecessary, initializeLogIfNecessary, isTraceEnabled, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning
public QuantileDiscretizer(String uid)
public QuantileDiscretizer()
public static QuantileDiscretizer load(String path)
public static MLReader<T> read()
public static org.slf4j.Logger org$apache$spark$internal$Logging$$log_()
public static void org$apache$spark$internal$Logging$$log__$eq(org.slf4j.Logger x$1)
public IntParam numBuckets()
Description copied from interface: QuantileDiscretizerBase
See also handleInvalid, which can optionally create an additional bucket for NaN values.
default: 2
Specified by: numBuckets in interface QuantileDiscretizerBase
public IntArrayParam numBucketsArray()
Description copied from interface: QuantileDiscretizerBase
See also handleInvalid, which can optionally create an additional bucket for NaN values.
Specified by: numBucketsArray in interface QuantileDiscretizerBase
public DoubleParam relativeError()
Description copied from interface: QuantileDiscretizerBase
Relative error (see documentation for org.apache.spark.sql.DataFrameStatFunctions.approxQuantile for description). Must be in the range [0, 1]. Note that in the multiple-columns case, the same relative error is applied to all columns.
default: 0.001
Specified by: relativeError in interface QuantileDiscretizerBase
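To give a sense of what relativeError buys: approxQuantile's documentation guarantees that the exact rank of the returned value x for target quantile p satisfies floor((p - err) * N) &lt;= rank(x) &lt;= ceil((p + err) * N). A small arithmetic sketch, with an assumed dataset size chosen only for illustration:

```java
public class RelativeErrorBound {
    /** Rank bounds for target quantile p under the documented approxQuantile guarantee. */
    static long[] rankBounds(double p, double err, long n) {
        long lo = (long) Math.floor((p - err) * n);
        long hi = (long) Math.ceil((p + err) * n);
        return new long[] {lo, hi};
    }

    public static void main(String[] args) {
        double err = 0.001;   // the default relativeError
        long n = 1_000_000L;  // assumed row count, for illustration only
        long[] b = rankBounds(0.5, err, n); // the median
        // With these numbers the returned value's rank is within 1,000 rows of 500,000
        System.out.println(b[0] + " <= rank <= " + b[1]);
    }
}
```

Smaller relativeError tightens this rank window at the cost of more work and memory during fitting; err = 0 computes exact quantiles.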
public Param&lt;String&gt; handleInvalid()
Description copied from interface: QuantileDiscretizerBase
Specified by: handleInvalid in interface QuantileDiscretizerBase
Specified by: handleInvalid in interface HasHandleInvalid
public final StringArrayParam outputCols()
Description copied from interface: HasOutputCols
Specified by: outputCols in interface HasOutputCols

public final StringArrayParam inputCols()
Description copied from interface: HasInputCols
Specified by: inputCols in interface HasInputCols

public final Param&lt;String&gt; outputCol()
Description copied from interface: HasOutputCol
Specified by: outputCol in interface HasOutputCol

public final Param&lt;String&gt; inputCol()
Description copied from interface: HasInputCol
Specified by: inputCol in interface HasInputCol
public String uid()
Description copied from interface: Identifiable
Specified by: uid in interface Identifiable
public QuantileDiscretizer setRelativeError(double value)
public QuantileDiscretizer setNumBuckets(int value)
public QuantileDiscretizer setInputCol(String value)
public QuantileDiscretizer setOutputCol(String value)
public QuantileDiscretizer setHandleInvalid(String value)
public QuantileDiscretizer setNumBucketsArray(int[] value)
public QuantileDiscretizer setInputCols(String[] value)
public QuantileDiscretizer setOutputCols(String[] value)
public StructType transformSchema(StructType schema)
Description copied from class: PipelineStage
Check transform validity and derive the output schema from the input schema.
We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks that do not depend on other parameters are handled by Param.validate().
A typical implementation should first verify the schema change and parameter validity, including complex parameter interaction checks.
Specified by: transformSchema in class PipelineStage
Parameters: schema - (undocumented)

public Bucketizer fit(Dataset&lt;?&gt; dataset)
Description copied from class: Estimator
Fits a model to the input data.
Specified by: fit in class Estimator&lt;Bucketizer&gt;
Parameters: dataset - (undocumented)

public QuantileDiscretizer copy(ParamMap extra)
Description copied from interface: Params
Creates a copy of this instance with the same UID and some extra params. See defaultCopy().
Specified by: copy in interface Params
Specified by: copy in class Estimator&lt;Bucketizer&gt;
Parameters: extra - (undocumented)