CData Cloud offers access to Splunk across several standard services and protocols, in a cloud-hosted solution. Any application that can connect to a MySQL or SQL Server database can connect to Splunk through CData Cloud.
CData Cloud allows you to standardize and configure connections to Splunk as though it were any other OData endpoint, or standard SQL Server/MySQL database.
This page provides a guide to Establishing a Connection to Splunk in CData Cloud, as well as information on the available resources, and a reference to the available connection properties.
Establishing a Connection shows how to authenticate to Splunk and configure any necessary connection properties to create a database in CData Cloud.
Accessing data from Splunk through the available standard services and CData Cloud administration is documented in further detail in the CData Cloud Documentation.
Connect to Splunk by selecting the corresponding icon in the Database tab. Required properties are listed under Settings. The Advanced tab lists connection properties that are not typically required.
You must specify the URL to a valid Splunk server. By default, the Cloud makes requests on port 8089.
By default, the Cloud attempts to negotiate TLS/SSL with the server.
There are two ways to authenticate to Splunk data: logging in with Splunk credentials, or using a Splunk authentication token.
To authenticate with Splunk credentials, set User and Password to your login credentials.
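For example, an illustrative set of connection properties (the server host and credentials below are placeholders):
URL=https://my-splunk-server:8089; User=splunkuser; Password=splunkpassword;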
When you access Splunk via an authentication token, you can access the Splunk platform using Representational State Transfer (REST) calls. On Splunk Enterprise, you can also use the CLI. Both of these methods enable you to access the instance and make requests without having to authenticate via credentials.
Note: Unless you are accessing a search head cluster (where you can use the same token on all of the search heads in the cluster), you must have a separate token for each instance being accessed.
To authenticate with a Splunk token:
By default, the Cloud attempts to negotiate SSL/TLS by checking the server's certificate against the system's trusted certificate store.
To specify another certificate, see the SSLServerCert property for the available formats to do so.
To connect through the Windows system proxy, you do not need to set any additional connection properties. To connect to other proxies, set ProxyAutoDetect to false.
To authenticate to an HTTP proxy, set ProxyAuthScheme, ProxyUser, and ProxyPassword, in addition to ProxyServer and ProxyPort.
Set the following properties:
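For example, an illustrative configuration for an HTTP proxy that requires BASIC authentication (all values below are placeholders):
ProxyAutoDetect=false; ProxyServer=192.168.1.100; ProxyPort=8080; ProxyAuthScheme=BASIC; ProxyUser=proxyuser; ProxyPassword=proxypassword;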
The Cloud models Splunk reports, searches, datasets, and data models as tables in a relational database that you can read from/write to with SQL-92 queries.
You can work with all of the tables in your account: when you connect, the Cloud retrieves the metadata from Splunk and dynamically reflects any changes in the table schemas.
You can call the CreateSchema stored procedure to persist a static schema across connections. The stored procedure saves the schema to a text file; the text file has a simple format that also makes schemas easy to customize.
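For example, a hypothetical call that persists the schema of the SearchJobs table to a file (the TableName and FileName parameter names and the .rsd path are assumptions; see the CreateSchema reference for the exact inputs):
EXECUTE CreateSchema TableName = 'SearchJobs', FileName = 'C:\Schemas\SearchJobs.rsd'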
See Tables for more details on updating and querying datasets, data models, and searches.
The Cloud also surfaces data through Views representing the following Splunk objects:
The Cloud models the data in Splunk as a list of tables in a relational database that can be queried using standard SQL statements.
Name | Description |
DataModels | Create, query, update, and delete data models in Splunk. |
Datasets | Create, query, update, and delete datasets in Splunk. |
SearchJobs | Create, query, update, and delete search jobs in Splunk. |
Create, query, update, and delete data models in Splunk.
The Cloud will use the Splunk API to process search criteria that refer to the Id column. This column supports server-side processing for the = operator. The Cloud processes other filters client-side, in memory.
For example, the following query is processed server side by the Splunk APIs:
SELECT * FROM DataModels WHERE Id = 'SampleModel'
Id is the only required column for an insert; the DataModels table accepts only the Id and Acceleration columns in an INSERT.
INSERT INTO DataModels (Id, Acceleration) VALUES ('initialname', '{"enabled":false,"earliest_time":"","hunk.file_format":"","hunk.dfs_block_size":0,"hunk.compression_codec":""}' )
The DataModels table allows updates for the Acceleration column when Id is specified. You can also set the Provisional pseudocolumn.
UPDATE DataModels SET Provisional = 'true', Acceleration = '{"enabled":false,"earliest_time": "-1mon", "cron_schedule": "0 */12 * * *","hunk.file_format":"","hunk.dfs_block_size":0,"hunk.compression_codec":""}' WHERE Id = 'initialname'
The DataModels table allows deleting a record when Id is specified.
DELETE FROM Datamodels WHERE Id = 'initialname'
Name | Type | ReadOnly | References | Description |
Id [KEY] | String | True | | Link of the data model. |
Disabled | Boolean | True | | Indicates if the data model is disabled/enabled. |
UpdatedAt | Datetime | True | | Datetime of the last update of the data model. |
Description | String | True | | Description of the data model. |
Name | String | False | | The name of the data model in Splunk. |
DisplayName | String | True | | The name displayed for the data model in Splunk. |
Author | String | True | | Splunk user who created the data model. |
App | String | True | | Splunk app where the data model is shared. |
Owner | String | True | | Splunk user who owns the data model. |
CanShareApp | Boolean | True | | Boolean indicating whether the data model can be shared in an app. |
CanShareGlobal | Boolean | True | | Boolean indicating whether the data model can be shared globally. |
CanShareUser | Boolean | True | | Boolean indicating whether the data model can be shared by the user. |
CanWrite | Boolean | True | | Boolean indicating whether the data model can be extended by the user. |
Modifiable | Boolean | True | | Boolean indicating whether the data model can be modified. |
Removable | Boolean | True | | Boolean indicating whether the data model can be removed. |
Acceleration | String | False | | Acceleration settings for the data model. Supply JSON to specify any or all of the following settings: enabled (true or false), earliest_time (time modifier), or cron_schedule (cron string). |
AccelerationAllowed | Boolean | True | | Boolean indicating that acceleration is allowed or not for the data model. |
AccelerationHunkCompression | String | True | | Specifies the compression codec to be used for the accelerated orc or parquet format files. |
DatasetCommands | String | True | | Data model commands. |
DatasetDescription | String | True | | The JSON describing the data model. |
DatasetCurrentCommand | Integer | True | | Current command of the data model. |
DatasetEarliestTime | Datetime | True | | Earliest time of data model events being processed. |
DatasetLatestTime | Datetime | True | | Latest time of data model events being processed. |
DatasetDiversity | String | True | | Diversity of events being processed. |
DatasetLimiting | Integer | True | | Limitations of events being processed. |
DatasetMode | String | True | | Search mode events being processed. |
DatasetSampleRatio | String | True | | Sample ratio of the data model. |
DatasetFields | String | True | | Indexed fields the data model has. |
DatasetType | String | True | | Dataset type. |
Type | String | True | | Data model type. |
Digest | String | True | | Content digest type. |
TagsWhitelist | String | True | | Whitelist of data model tags. |
ReadPermitions | String | True | | Permissions to read this data model. |
WritePermitions | String | True | | Permissions to write to this data model. |
Sharing | String | True | | Data model sharing type. |
Username | String | True | | Username of the Splunk user. |
Pseudo column fields are used in the WHERE clause of SELECT statements and offer a more granular control over the tuples that are returned from the data source.
Name | Type | Description |
Provisional | Boolean | Indicates whether the data model is provisional. Provisional data models are not saved. Specify true to validate a data model before saving it. |
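For example, a minimal sketch that filters on the pseudo column:
SELECT * FROM DataModels WHERE Provisional = 'true'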
Create, query, update, and delete datasets in Splunk.
The Datasets table requires DataModelId in the WHERE clause. The DataModelId column supports server-side processing for the = operator. The Cloud processes other search criteria client-side, in memory.
SELECT * FROM DataSets WHERE DataModelId = 'SampleModel'
Splunk allows inserts only when DataModelId, ParentName, and ObjectName are all specified.
INSERT INTO [Datasets] (ObjectName, ParentName, DataModelId) VALUES ('SampleSet', 'BaseEvent', 'SampleModel')
The Datasets table allows updates when DataModelId is specified. The columns that can be updated in this case are the following: Description and DisplayName.
When ObjectName is also specified, you can update the following columns: ObjectDisplayName, ParentName, Comment, Fields, Calculations, Constraints, Lineage, ObjectSearchNoFields, ObjectSearch, AutoextractSearch, PreviewSearch, AccelerationSearch, BaseSearch, and TsidxNamespace.
UPDATE Datasets SET Description = 'model description', DisplayName = 'Model Display Name' WHERE DataModelId = 'SampleModel'
UPDATE Datasets SET ParentName = 'BaseEvent', BaseSearch = '| search (index=* OR index=_*) | fields _time, RootObject', AccelerationSearch = ' search (index=* OR index=_*) ' WHERE DataModelId = 'SampleModel' AND ObjectName = 'SampleSet'
Datasets can be deleted by providing the DataModelId and the ObjectName of the dataset.
DELETE FROM Datasets WHERE DataModelId = 'SampleModel' AND ObjectName = 'SampleSet'
Name | Type | ReadOnly | References | Description |
ObjectName [KEY] | String | False | | Name of the dataset object. |
DatamodelId [KEY] | String | False | DataModels.Id | Id of the data model the object belongs to. |
DisplayName | String | False | | Name of the data model the object belongs to. |
Description | String | False | | Dataset description. |
ObjectNameList | String | True | | List of the objects in the data model. |
ObjectDisplayName | String | False | | Name displayed in Splunk for the object. |
ParentName | String | False | | Name of the Parent Event. |
Comment | String | False | | Dataset comments. |
Fields | String | False | | Dataset events indexed fields. |
Calculations | String | False | | Saved calculations for dataset fields. |
Constraints | String | False | | Saved constraints for dataset fields. |
Lineage | String | False | | Dataset lineage. |
ObjectSearchNoFields | String | False | | Object search query without fields. |
ObjectSearch | String | False | | Saved search query for the object. |
AutoextractSearch | String | False | | Search query for autoextraction. |
PreviewSearch | String | False | | Search preview query. |
AccelerationSearch | String | False | | Search query including acceleration. |
BaseSearch | String | False | | Basic search query. |
TsidxNamespace | String | False | | Allocated namespace. |
EventBased | Integer | True | | Number of Event-Based objects in the data model. |
TransactionBased | Integer | True | | Number of Transaction-Based objects in the data model. |
SearchBased | Integer | True | | Number of Search-Based objects in the data model. |
Create, query, update, and delete search jobs in Splunk.
The Cloud will use the Splunk APIs to process criteria on the search Id (Sid) specified in the WHERE clause. The Sid column supports server-side processing for the = operator. The Cloud processes other search criteria client-side, in memory.
SELECT * FROM SearchJobs
SELECT * FROM SearchJobs WHERE Sid = '123456789.1234'
Splunk allows inserts only when EventSearch is specified. You can insert the Custom, EarliestTime, LatestTime, Label, and StatusBuckets columns and all pseudocolumns.
INSERT INTO SearchJobs (Custom, EventSearch, LatestTime, Timeout) VALUES ('custom1=test1, custom2=test2', ' from datamodel SampleModel', 'now', '60')
The SearchJobs table allows updates of the Custom column only when Sid is specified.
UPDATE SearchJobs SET Custom = 'custom1=test3, custom2=test4' WHERE sid = '123456789.1234'
SearchJobs can be deleted by providing the Sid.
DELETE FROM SearchJobs WHERE Sid = '123456789.1234'
Name | Type | ReadOnly | References | Description |
Sid [KEY] | String | False | | The search Id number. |
EventSearch | String | False | | Subset of the entire search that is before any transforming commands. |
Custom | String | False | | Custom job property. In an INSERT operation, pass the values as a comma-separated list of pairs of keys and values. |
EarliestTime | String | False | | The earliest time a search job is configured to start. |
LatestTime | String | False | | The latest time a search job is configured to start. |
CursorTime | String | True | | The earliest time from which no events are later scanned. Can be used to indicate progress. |
Delegate | String | True | | For saved searches, specifies jobs that were started by the user. Defaults to scheduler. |
DiskUsage | Long | True | | The total amount of disk space used, in bytes. |
DispatchState | String | True | | The state of the search. Can be any of QUEUED, PARSING, RUNNING, PAUSED, FINALIZING, FAILED, or DONE. |
DoneProgress | Double | True | | A number between 0 and 1.0 that indicates the approximate progress of the search. doneProgress = (latestTime-cursorTime) / (latestTime-earliestTime) |
DropCount | Integer | True | | For real-time searches only, the number of possible events that were dropped due to the rt_queue_size (defaults to 100000). |
EventAvailableCount | Integer | True | | The number of events that are available for export. |
EventCount | Integer | True | | The number of events returned by the search. |
EventFieldCount | Integer | True | | The number of fields found in the search results. |
EventIsStreaming | Boolean | True | | Indicates if the events of this search are being streamed. |
EventIsTruncated | Boolean | True | | Indicates if the events of the search are not stored, making them unavailable from the events endpoint for the search. |
EventPreviewableCount | Integer | True | | Number of in-memory events that are not yet committed to disk. |
EventSorting | String | True | | Indicates if the events of this search are sorted, and in which order. |
IsDone | Boolean | True | | Indicates if the search has completed. |
IsEventsPreviewEnabled | String | True | | Indicates if the timeline_events_preview setting is enabled in limits.conf. |
IsFailed | Boolean | True | | Indicates if there was a fatal error executing the search. For example, invalid search string syntax. |
IsFinalized | Boolean | True | | Indicates if the search was finalized (stopped before completion). |
IsPaused | Boolean | True | | Indicates if the search is paused. |
IsPreviewEnabled | Boolean | True | | Indicates if previews are enabled. |
IsRealTimeSearch | Boolean | True | | Indicates if the search is a real-time search. |
IsRemoteTimeline | Boolean | True | | Indicates if the remote timeline feature is enabled. |
IsSaved | Boolean | True | | Indicates that the search job is saved on disk. Search artifacts are saved on disk for 7 days from the last time that the job was viewed or touched. |
IsSavedSearch | Boolean | True | | Indicates if this is a saved search run using the scheduler. |
IsZombie | Boolean | True | | Indicates if the process running the search died without finishing the search. |
Keywords | String | True | | All positive keywords used by this search. A positive keyword is a keyword that is not in a NOT clause. |
Label | String | False | | Custom name created for this search. |
Messages | String | True | | Errors and debug messages. |
NumPreviews | Integer | True | | Number of previews generated so far for this search job. |
Performance | String | True | | A representation of the execution costs. |
Priority | Integer | True | | An integer between 0-10 that indicates the search priority. |
RemoteSearch | String | True | | The search string that is sent to every search peer. |
ReportSearch | String | True | | If reporting commands are used, the reporting search. |
ResultCount | Integer | True | | The total number of results returned by the search. In other words, this is the subset of scanned events (represented by the ScanCount) that actually matches the search terms. |
ResultIsStreaming | Boolean | True | | Indicates if the final results of the search are available using streaming (for example, no transforming operations). |
ResultPreviewCount | Integer | True | | The number of result rows in the latest preview results. |
RunDuration | Decimal | True | | Time in seconds that the search took to complete. |
ScanCount | Integer | True | | The number of events that are scanned or read off disk. |
SearchEarliestTime | Datetime | True | | Specifies the earliest time for a search, as specified in the search command rather than the EarliestTime parameter. It does not snap to the indexed data time bounds for all-time searches. |
SearchLatestTime | Datetime | True | | Specifies the latest time for a search, as specified in the search command rather than the LatestTime parameter. It does not snap to the indexed data time bounds for all-time searches. |
SearchProviders | String | True | | A list of all the search peers that were contacted. |
StatusBuckets | Integer | False | | Maximum number of timeline buckets. |
TTL | String | True | | The time to live, or the time before the search job expires after it completes. |
Pseudo column fields are used in the WHERE clause of SELECT statements and offer a more granular control over the tuples that are returned from the data source.
Name | Type | Description |
SearchMode | String | Searching mode, realtime or normal. If set to realtime, the search runs over the live data. The allowed values are normal, realtime. |
EnableLookups | Boolean | Indicates whether lookups should be applied to events. |
AutoPause | Integer | If specified, the search job pauses after this many seconds of inactivity. (0 means never autopause.) |
AutoCancel | Integer | If specified, the job automatically cancels after this many seconds of inactivity. (0 means never autocancel.) |
AdhocSearchLevel | Integer | Specify a search mode. Use one of the following search modes: verbose, fast, or smart. The allowed values are verbose, fast, smart. |
ForceBundleReplication | Boolean | Specifies whether this search should cause (and wait depending on the value of SyncBundleReplication) for bundle synchronization with all search peers. |
IndexEarliest | String | Specify a time string. Sets the earliest inclusive time bounds for the search, based on the index time bounds. |
IndexLatest | String | Specify a time string. Sets the latest exclusive time bounds for the search, based on the index time bounds. |
IndexedRealtime | Boolean | Indicates whether or not to use the indexed-realtime mode for real-time searches. |
IndexedRealtimeOffset | Integer | Sets disk sync delay for indexed real-time search (seconds). |
MaxCount | Integer | The number of events that can be accessible in any given status bucket. |
MaxTime | Integer | The number of seconds to run this search before finalizing. Specify 0 to never finalize. |
Namespace | String | The application namespace in which to restrict searches. |
Now | String | Specify a time string to set the absolute time used for any relative time specifier in the search. Defaults to the current system time. You can specify a relative time modifier for this parameter. For example, specify +2d to specify the current time plus two days. |
ReduceFrequency | Integer | Determines how frequently to run the MapReduce reduce phase on accumulated map values. |
ReloadMacros | Boolean | Specifies whether to reload macro definitions from the configuration file. |
RemoteServerList | String | Comma-separated list of (possibly wildcarded) servers from which raw events should be pulled. |
ReplaySpeed | Integer | Indicate a real-time search replay speed factor. For example, 1 indicates normal speed, 0.5 indicates half of normal speed, and 2 indicates twice as fast as normal. |
ReplayStartTime | String | Relative wall-clock start time for the replay. |
ReplayEndTime | String | Relative end time for the replay clock. The replay stops when the clock time reaches this time. |
ReuseMaxSecondsAgo | Integer | Specifies the number of seconds ago to check when an identical search is started and return the search Id of the job instead of starting a new job. |
RequiredField | String | Adds a required field to the search. |
RealTimeBlocking | Boolean | For a real-time search, indicates if the indexer blocks if the queue for this search is full. |
RealTimeIndexFilter | Boolean | For a real-time search, indicates if the indexer prefilters events. |
RealTimeMaxBlockSecs | Integer | For a real-time search with RealTimeBlocking set to true, the maximum time to block. Specify 0 to indicate no limit. |
RealTimeQueueSize | Integer | For a real-time search, the queue size (in events) that the indexer should use for this search. |
Timeout | Integer | The number of seconds to keep this search after processing has stopped. |
SyncBundleReplication | String | Specifies whether this search should wait for bundle replication to complete. |
Views are similar to tables in the way that data is represented; however, views are read-only.
Queries can be executed against a view as if it were a normal table.
Name | Description |
AlertsInInternalServer | A dataset object in the example InternalServer data model. |
LookUpReport | An example lookup report representing a view based on a saved report in Splunk. |
UploadedModel | An example of a table object inside a data model. |
A dataset object in the example InternalServer data model.
This is an example of a dataset view. These views are generated from dataset objects inside a data model. The Cloud will use the Splunk APIs to process the following query components; the Cloud processes other parts of the query client-side in memory.
All columns support server-side processing for the following operators and functions:
LIMIT, ORDER BY, GROUP BY, and HAVING are also processed server-side. An exception is when the selected columns include fields that are not in the GROUP BY clause; in that case, grouping, filtering, and limiting are handled client-side.
When an unsupported filter or function is used, all processing (other than selecting the specified fields) is completed client-side. This is also the case when a SELECT statement includes a column that is not in the GROUP BY clause.
For example, the Cloud uses the Splunk APIs to process the following queries.
SELECT Component, Timeendpos as Timeend FROM [AlertsInInternalServer] WHERE Component = 'Saved' OR EventType != '' AND Priority IS NOT NULL AND Linecount NOT IN ('1', '2') ORDER BY Priority DESC LIMIT 5
SELECT AVG(Suppressed), Priority FROM [AlertsInInternalServer] GROUP BY Priority HAVING AVG(Suppressed) > 0
Name | Type | Description |
_time | Datetime | |
component | String | |
date_hour | Int | |
date_mday | Int | |
date_minute | Int | |
date_month | String | |
date_second | Int | |
date_wday | String | |
date_year | Int | |
date_zone | Int | |
digest_mode | Int | |
dispatch_time | Int | |
host | String | |
linecount | Int | |
log_level | String | |
priority | String | |
punct | String | |
savedsearch_id | String | |
scheduled_time | Int | |
search_type | String | |
server_alert_actions | String | |
server_app | String | |
server_message | String | |
server_result_count | Int | |
server_run_time | Double | |
server_savedsearch_name | String | |
server_sid | String | |
server_status | String | |
server_user | String | |
source | String | |
sourcetype | String | |
splunk_server | String | |
suppressed | Int | |
thread_id | String | |
timeendpos | Int | |
timestartpos | Int | |
window_time | Int |
An example lookup report representing a view based on a saved report in Splunk.
This is an example of a report view. These views are generated from saved reports in Splunk.
The Cloud will use the Splunk APIs to process the following query components; the Cloud processes other parts of the query client-side in memory.
Runs a saved search (report) and returns the results of that search. If the search contains replacement placeholder terms, such as $replace_me$, the search processor replaces the placeholders with the strings you specify.
For example, a query that supplies a value for a placeholder generates a search statement with the placeholder replaced by that value.
All replacement placeholder terms are exposed dynamically as pseudo columns.
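A hypothetical illustration, assuming the saved report defines a $replace_me$ placeholder that the Cloud exposes as a pseudo column named replace_me:
SELECT * FROM LookUpReport WHERE replace_me = 'Europe'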
All columns support server-side processing for the following operators and functions:
LIMIT, ORDER BY, GROUP BY, and HAVING are also processed server-side. An exception is when the selected columns include fields that are not in the GROUP BY clause; in that case, grouping, filtering, and limiting are handled client-side.
When an unsupported filter or function is used, all processing (other than selecting the specified fields) is completed client-side. This is also the case when a SELECT statement includes a column that is not in the GROUP BY clause.
For example, the Cloud processes the following queries server-side:
SELECT Country, Subregion as Sub FROM LookUpReport WHERE Iso2 != '123' OR continent = 'Europe' AND iso3 NOT IN ('example_1', 'example_2') ORDER BY Country DESC LIMIT 5
SELECT AVG(Iso2), Subregion FROM LookUpReport GROUP BY Subregion HAVING AVG(Iso2) > 0
Name | Type | Description |
continent | String | |
country | String | |
iso2 | String | |
iso3 | String | |
region_un | String | |
region_wb | String | |
subregion | String |
An example of a table object inside a data model.
This is an example of a view generated from a table object inside a data model. The Cloud will use the Splunk APIs to process the following query components; the Cloud processes other parts of the query client-side in memory.
All columns support server-side processing for the following operators and functions.
LIMIT, ORDER BY, GROUP BY, and HAVING are also processed server-side. An exception is when the selected columns include fields that are not in the GROUP BY clause; in that case, grouping, filtering, and limiting are handled client-side.
When an unsupported filter or function is used, all processing (other than selecting the specified fields) is completed client-side. This is also the case when a SELECT statement includes a column that is not in the GROUP BY clause.
For example, the following queries are processed server side:
SELECT Component, Timeendpos as Timeend FROM [UploadedModel] WHERE Component = 'Saved' OR DEST_CITY_MARKET_ID != '' AND DEST_AIRPORT_ID NOT IN ('1', '2') ORDER BY ORIGIN_AIRPORT_ID DESC LIMIT 5
SELECT AVG(DEST_AIRPORT_ID), ORIGIN_AIRPORT_ID FROM [UploadedModel] GROUP BY ORIGIN_AIRPORT_ID HAVING AVG(DEST_AIRPORT_ID) > 0
Name | Type | Description |
_time | Datetime | |
DEST_AIRPORT_ID | Int | |
DEST_AIRPORT_SEQ_ID | Int | |
DEST_CITY_MARKET_ID | Int | |
host | String | |
linecount | Int | |
ORIGIN_AIRPORT_ID | Int | |
ORIGIN_AIRPORT_SEQ_ID | Int | |
ORIGIN_CITY_MARKET_ID | Int | |
punct | String | |
source | String | |
sourcetype | String | |
splunk_server | String | |
timestamp | String |
Stored procedures are function-like interfaces that extend the functionality of the Cloud beyond simple SELECT/INSERT/UPDATE/DELETE operations with Splunk.
Stored procedures accept a list of parameters, perform their intended function, and then return any relevant response data from Splunk, along with an indication of whether the procedure succeeded or failed.
Name | Description |
CreateHTTPEvent | The HTTP Event Collector (HEC) lets you send data and application events to a Splunk deployment over the HTTP and Secure HTTP (HTTPS) protocols. |
CreateIndex | Create a data index. |
CreateSavedSearch | Create a saved search. |
DeleteIndex | Delete a data index. |
UpdateIndex | Update a data index. |
UpdateSavedSearch | Update a saved search. |
The HTTP Event Collector (HEC) lets you send data and application events to a Splunk deployment over the HTTP and Secure HTTP (HTTPS) protocols.
Name | Type | Required | Description |
EventContent | String | True | The content of the event to send. |
ContentType | String | False | The type of the content specified on the EventContent input. Allowed values: JSON, RAWTEXT. |
ChannelGUID | String | False | The GUID of the channel used for the event. This is required when ContentType is RAWTEXT. |
Name | Type | Description |
Success | String | Returns the success status of the created event. |
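For example, a hypothetical invocation that sends a JSON-formatted event (the payload is a placeholder):
EXECUTE CreateHTTPEvent EventContent = '{"event": "Hello from CData Cloud"}', ContentType = 'JSON'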
Create a data index.
Name | Type | Required | Description |
Name | String | True | The name of the index to create. |
BlockSignSize | String | False | Controls how many events make up a block for block signatures. If this is set to 0, block signing is disabled for this index. A recommended value is 100. |
BucketRebuildMemoryHint | String | False | Suggestion for the bucket rebuild process for the size of the time-series (tsidx) file to make. Default value, auto, varies by the amount of physical RAM on the host. |
ColdPath | String | False | An absolute path that contains the colddbs for the index. The path must be readable and writable. |
ColdToFrozenDir | String | False | Destination path for the frozen archive. Use as an alternative to a ColdToFrozenScript. |
ColdToFrozenScript | String | False | Path to the archiving script. |
DataType | String | False | Specifies the type of index. |
EnableOnlineBucketRepair | String | False | Enables asynchronous 'online fsck' bucket repair, which runs concurrently with splunk software. When enabled, you do not have to wait until buckets are repaired to start the splunk platform. |
FrozenTimePeriodInSecs | String | False | Number of seconds after which indexed data rolls to frozen. Defaults to 188697600 (6 years). |
HomePath | String | False | An absolute path that contains the hot and warm buckets for the index. |
MaxBloomBackfillBucketAge | String | False | Valid values are: integer[m|s|h|d] if a warm or cold bucket is older than the specified age, do not create or rebuild its bloomfilter. Specify 0 to never rebuild bloomfilters. |
MaxConcurrentOptimizes | String | False | The number of concurrent optimize processes that can run against a hot bucket. |
MaxDataSize | String | False | The maximum size in MB for a hot DB to reach before a roll to warm is triggered. Specifying 'auto' or 'auto_high_volume' causes Splunk software to autotune this parameter (recommended). |
MaxHotBuckets | String | False | Maximum hot buckets that can exist per index. |
MaxHotIdleSecs | String | False | Maximum life, in seconds, of a hot bucket. |
MaxHotSpanSecs | String | False | Upper bound of target maximum timespan of hot/warm buckets in seconds. |
MaxMemMB | String | False | The amount of memory, expressed in MB, to allocate for buffering a single tsidx file into memory before flushing to disk. |
MaxMetaEntries | String | False | Sets the maximum number of unique lines in .data files in a bucket, which may help to reduce memory consumption. |
MaxTimeUnreplicatedNoAcks | String | False | Upper limit, in seconds, on how long an event can sit in raw slice. Applies only if replication is enabled for this index. |
MaxTimeUnreplicatedWithAcks | String | False | Upper limit, in seconds, on how long events can sit unacknowledged in a raw slice. Applies only if you have enabled acks on forwarders and have replication enabled (with clustering). |
MaxTotalDataSizeMB | String | False | The maximum size of an index (in MB). If an index grows larger than the maximum size, the oldest data is frozen. |
MaxWarmDBCount | String | False | The maximum number of warm buckets. If this number is exceeded, the warm bucket/s with the lowest value for their latest times is moved to cold. |
MinRawFileSyncSecs | String | False | Specify an integer (or 'disable') for this parameter. This parameter sets how frequently splunkd forces a filesystem sync while compressing journal slices. |
MinStreamGroupQueueSize | String | False | Minimum size of the queue that stores events in memory before committing them to a tsidx file. |
PartialServiceMetaPeriod | String | False | Related to serviceMetaPeriod. If set, it enables metadata sync every SPECIFIED seconds, but only for records where the sync can be done efficiently in-place, without requiring a full re-write of the metadata file. |
ProcessTrackerServiceInterval | String | False | Specifies, in seconds, how often the indexer checks the status of the child OS processes it launched to see if it can launch new processes for queued requests. If set to 0, the indexer checks child process status every second. |
QuarantineFutureSecs | String | False | Events with timestamp of QuarantineFutureSecs newer than 'now' are dropped into quarantine bucket. |
QuarantinePastSecs | String | False | Events with timestamp of quarantinePastSecs older than 'now' are dropped into quarantine bucket. |
RawChunkSizeBytes | String | False | Target uncompressed size in bytes for individual raw slice in the rawdata journal of the index. 0 is not a valid value. If 0 is specified, rawChunkSizeBytes is set to the default value. |
RepFactor | String | False | Index replication control. This parameter applies to only clustering slaves. auto = Use the master index replication configuration value. 0 = Turn off replication for this index. |
RotatePeriodInSecs | String | False | How frequently (in seconds) to check if a new hot bucket needs to be created. Also, how frequently to check if there are any warm/cold buckets that should be rolled/frozen. |
ServiceMetaPeriod | String | False | Defines how frequently metadata is synced to disk, in seconds. |
SyncMeta | String | False | When true, a sync operation is called before file descriptor is closed on metadata file updates. This functionality improves integrity of metadata files, especially in regards to operating system crashes/machine failures. |
ThawedPath | String | False | An absolute path that contains the thawed (resurrected) databases for the index. Cannot be defined in terms of a volume definition. |
ThrottleCheckPeriod | String | False | Defines how frequently Splunk software checks for index throttling condition, in seconds. |
TstatsHomePath | String | False | Location to store datamodel acceleration TSIDX data for this index. If specified, it must be defined in terms of a volume definition. Path must be writable. |
WarmToColdScript | String | False | Path to a script to run when moving data from warm to cold. |
Name | Type | Description |
AssureUTF8 | Boolean | Boolean value indicating whether all data retrieved from the index is proper UTF8. |
BlockSignSize | Integer | Controls how many events make up a block for block signatures. If this is set to 0, block signing is disabled for this index. A recommended value is 100. |
BlockSignatureDatabase | String | The index that stores block signatures of events. This is a global setting, not a per index setting. |
BucketRebuildMemoryHint | String | Suggestion for the bucket rebuild process for the size of the time-series (tsidx) file to make. |
ColdPath | String | Filepath to the cold databases for the index. |
ColdPathExpanded | String | Absolute filepath to the cold databases. |
ColdToFrozenDir | String | Destination path for the frozen archive. Use as an alternative to a ColdToFrozenScript. |
ColdToFrozenScript | String | Path to the archiving script. |
CurrentDBSizeMB | Integer | Total size, in MB, of data stored in the index. The total includes data in the home, cold, and thawed paths. |
DataType | String | The type of index. |
DefaultDatabase | String | If no index destination information is available in the input data, the index shown here is the destination of such data. |
EnableOnlineBucketRepair | Boolean | Enables asynchronous 'online fsck' bucket repair, which runs concurrently with splunk software. When enabled, you do not have to wait until buckets are repaired to start the splunk platform. |
FrozenTimePeriodInSecs | Integer | Number of seconds after which indexed data rolls to frozen. |
HomePath | String | An absolute path that contains the hot and warm buckets for the index. |
HomePathExpanded | String | An absolute filepath to the hot and warm buckets for the index. |
IndexThreads | String | Number of threads used for indexing. This is a global setting, not a per index setting. |
IsInternal | Boolean | Indicates if this is an internal index. |
IsReady | Boolean | Indicates if an index is properly initialized. |
LastInitTime | Datetime | Last time the index processor was successfully initialized. This is a global setting, not a per index setting. |
MaxBloomBackfillBucketAge | String | If a bucket (warm or cold) is older than this, Splunk software does not create (or re-create) its bloom filter. |
MaxConcurrentOptimizes | Integer | The number of concurrent optimize processes that can run against a hot bucket. |
MaxDataSize | String | The maximum size in MB for a hot DB to reach before a roll to warm is triggered. Specifying 'auto' or 'auto_high_volume' causes Splunk software to autotune this parameter (recommended). |
MaxHotBuckets | String | Maximum hot buckets that can exist per index. |
MaxHotIdleSecs | Integer | Maximum life, in seconds, of a hot bucket. |
MaxHotSpanSecs | Integer | Upper bound of target maximum timespan of hot/warm buckets in seconds. |
MaxMemMB | Integer | The amount of memory, expressed in MB, to allocate for buffering a single tsidx file into memory before flushing to disk. |
MaxMetaEntries | Integer | Sets the maximum number of unique lines in .data files in a bucket, which may help to reduce memory consumption. |
MaxTime | Datetime | ISO8601 timestamp of the newest event time in the index. |
MaxTimeUnreplicatedNoAcks | Integer | Upper limit, in seconds, on how long an event can sit in raw slice. Applies only if replication is enabled for this index. |
MaxTimeUnreplicatedWithAcks | Integer | Upper limit, in seconds, on how long events can sit unacknowledged in a raw slice. Applies only if you have enabled acks on forwarders and have replication enabled (with clustering). |
MaxTotalDataSizeMB | Integer | The maximum size of an index (in MB). If an index grows larger than the maximum size, the oldest data is frozen. |
MaxWarmDBCount | String | The maximum number of warm buckets. If this number is exceeded, the warm bucket/s with the lowest value for their latest times is moved to cold. |
MemPoolMB | String | Determines how much memory is given to the indexer memory pool. This is a global setting, not a per-index setting. |
MinRawFileSyncSecs | String | Specify an integer (or 'disable') for this parameter. This parameter sets how frequently splunkd forces a filesystem sync while compressing journal slices. |
MinStreamGroupQueueSize | Integer | Minimum size of the queue that stores events in memory before committing them to a tsidx file. |
MinTime | Datetime | ISO8601 timestamp of the oldest event time in the index. |
PartialServiceMetaPeriod | Integer | Related to serviceMetaPeriod. If set, it enables metadata sync every SPECIFIED seconds, but only for records where the sync can be done efficiently in-place, without requiring a full re-write of the metadata file. |
ProcessTrackerServiceInterval | Integer | Specifies, in seconds, how often the indexer checks the status of the child OS processes it launched to see if it can launch new processes for queued requests. If set to 0, the indexer checks child process status every second. |
QuarantineFutureSecs | Integer | Events with timestamp of QuarantineFutureSecs newer than 'now' are dropped into quarantine bucket. |
QuarantinePastSecs | Integer | Events with timestamp of quarantinePastSecs older than 'now' are dropped into quarantine bucket. |
RawChunkSizeBytes | Integer | Target uncompressed size in bytes for individual raw slice in the rawdata journal of the index. 0 is not a valid value. If 0 is specified, rawChunkSizeBytes is set to the default value. |
RepFactor | String | Index replication control. This parameter applies to only clustering slaves. auto = Use the master index replication configuration value. 0 = Turn off replication for this index. |
RotatePeriodInSecs | Integer | How frequently (in seconds) to check if a new hot bucket needs to be created. Also, how frequently to check if there are any warm/cold buckets that should be rolled/frozen. |
ServiceMetaPeriod | Integer | Defines how frequently metadata is synced to disk, in seconds. |
SuppressBannerList | String | List of indexes for which we suppress 'index missing' warning banner messages. This is a global setting, not a per index setting. |
Sync | String | Specifies the number of events that trigger the indexer to sync events. This is a global setting, not a per index setting. |
SyncMeta | Boolean | When true, a sync operation is called before file descriptor is closed on metadata file updates. This functionality improves integrity of metadata files, especially in regards to operating system crashes/machine failures. |
ThawedPath | String | An absolute path that contains the thawed (resurrected) databases for the index. Cannot be defined in terms of a volume definition. |
ThawedPathExpanded | String | Absolute filepath to the thawed (resurrected) databases. |
ThrottleCheckPeriod | Integer | Defines how frequently Splunk software checks for index throttling condition, in seconds. |
TotalEventCount | Integer | Total number of events in the index. |
TsidxDedupPostingsListMaxTermsLimit | Integer | This setting is valid only when tsidxWritingLevel is at 4 or higher. This maximum term limit sets an upper bound on the number of terms kept inside an in-memory hash table that serves to improve tsidx compression. |
TstatsHomePath | String | Location to store datamodel acceleration TSIDX data for this index. If specified, it must be defined in terms of a volume definition. Path must be writable. |
WarmToColdScript | String | Path to a script to run when moving data from warm to cold. |
Success | Boolean | Boolean indicating whether the stored procedure was executed successfully. |
ErrorCode | Integer | The error code in case the procedure is not executed successfully. |
ErrorMessage | String | The error message in case the procedure is not executed successfully. |
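For example, a minimal sketch that creates an index and caps its total size (the index name and size are placeholders):
EXECUTE CreateIndex Name = 'cdata_test_index', MaxTotalDataSizeMB = '512000'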
Create a saved search.
Name | Type | Required | Description |
Name | String | True | A name for the search |
Search | String | True | The search query to save |
Description | String | False | Description of this saved search. |
CronSchedule | String | False | The cron schedule to execute this search. For example: */5 * * * * causes the search to execute every 5 minutes. |
Disabled | Boolean | False | Indicates if this saved search is disabled. |
IsScheduled | Boolean | False | Indicates if this search is to be run on a schedule. |
IsVisible | Boolean | False | Indicates if this saved search appears in the visible saved search list. |
RealTimeSchedule | Boolean | False | If this value is set to 1, the scheduler bases its determination of the next scheduled search execution time on the current time. If this value is set to 0, it is determined based on the last search execution time. |
RunOnStartup | Boolean | False | Indicates whether this search runs on startup. If it does not run on startup, it runs at the next scheduled time. |
SchedulePriority | String | False | Indicates the scheduling priority of a specific search. The allowed values are default, higher, highest. |
UserContext | String | False | If user context is provided, servicesNS node will be used (/servicesNS/[UserContext]/search), otherwise it defaults to the general endpoint /services. |
Name | Type | Description |
Success | Boolean | Returns the success status of the created saved search. |
Message | String | Warnings from the server. |
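For example, a hypothetical saved search scheduled to run every five minutes (the name and search string are placeholders):
EXECUTE CreateSavedSearch Name = 'InternalErrors', Search = 'search index=_internal log_level=ERROR', CronSchedule = '*/5 * * * *', IsScheduled = 'true'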
Delete a data index.
Name | Type | Required | Description |
Name | String | True | The name of the index to delete. |
Name | Type | Description |
Success | Boolean | Boolean indicating whether the stored procedure was executed successfully. |
ErrorCode | Integer | The error code in case the procedure is not executed successfully. |
ErrorMessage | String | The error message in case the procedure is not executed successfully. |
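For example, assuming the hypothetical index created earlier:
EXECUTE DeleteIndex Name = 'cdata_test_index'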
Update a data index.
Name | Type | Required | Description |
Name | String | True | The name of the index to update. |
BlockSignSize | String | False | Controls how many events make up a block for block signatures. If this is set to 0, block signing is disabled for this index. A recommended value is 100. |
BucketRebuildMemoryHint | String | False | Suggestion for the bucket rebuild process for the size of the time-series (tsidx) file to make. Default value, auto, varies by the amount of physical RAM on the host. |
ColdToFrozenDir | String | False | Destination path for the frozen archive. Use as an alternative to a ColdToFrozenScript. |
ColdToFrozenScript | String | False | Path to the archiving script. |
EnableOnlineBucketRepair | String | False | Enables asynchronous 'online fsck' bucket repair, which runs concurrently with splunk software. When enabled, you do not have to wait until buckets are repaired to start the splunk platform. |
FrozenTimePeriodInSecs | String | False | Number of seconds after which indexed data rolls to frozen. Defaults to 188697600 (6 years). |
MaxBloomBackfillBucketAge | String | False | Valid values are: integer[m|s|h|d] if a warm or cold bucket is older than the specified age, do not create or rebuild its bloomfilter. Specify 0 to never rebuild bloomfilters. |
MaxConcurrentOptimizes | String | False | The number of concurrent optimize processes that can run against a hot bucket. |
MaxDataSize | String | False | The maximum size in MB for a hot DB to reach before a roll to warm is triggered. Specifying 'auto' or 'auto_high_volume' causes Splunk software to autotune this parameter (recommended). |
MaxHotBuckets | String | False | Maximum hot buckets that can exist per index. |
MaxHotIdleSecs | String | False | Maximum life, in seconds, of a hot bucket. |
MaxHotSpanSecs | String | False | Upper bound of target maximum timespan of hot/warm buckets in seconds. |
MaxMemMB | String | False | The amount of memory, expressed in MB, to allocate for buffering a single tsidx file into memory before flushing to disk. |
MaxMetaEntries | String | False | Sets the maximum number of unique lines in .data files in a bucket, which may help to reduce memory consumption. |
MaxTimeUnreplicatedNoAcks | String | False | Upper limit, in seconds, on how long an event can sit in raw slice. Applies only if replication is enabled for this index. |
MaxTimeUnreplicatedWithAcks | String | False | Upper limit, in seconds, on how long events can sit unacknowledged in a raw slice. Applies only if you have enabled acks on forwarders and have replication enabled (with clustering). |
MaxTotalDataSizeMB | String | False | The maximum size of an index (in MB). If an index grows larger than the maximum size, the oldest data is frozen. |
MaxWarmDBCount | String | False | The maximum number of warm buckets. If this number is exceeded, the warm bucket/s with the lowest value for their latest times is moved to cold. |
MinRawFileSyncSecs | String | False | Specify an integer (or 'disable') for this parameter. This parameter sets how frequently splunkd forces a filesystem sync while compressing journal slices. |
MinStreamGroupQueueSize | String | False | Minimum size of the queue that stores events in memory before committing them to a tsidx file. |
PartialServiceMetaPeriod | String | False | Related to serviceMetaPeriod. If set, it enables metadata sync every n seconds, but only for records where the sync can be done efficiently in-place, without requiring a full re-write of the metadata file. |
ProcessTrackerServiceInterval | String | False | Specifies, in seconds, how often the indexer checks the status of the child OS processes it launched to see if it can launch new processes for queued requests. If set to 0, the indexer checks child process status every second. |
QuarantineFutureSecs | String | False | Events with timestamp of QuarantineFutureSecs newer than 'now' are dropped into quarantine bucket. |
QuarantinePastSecs | String | False | Events with timestamp of quarantinePastSecs older than 'now' are dropped into quarantine bucket. |
RawChunkSizeBytes | String | False | Target uncompressed size in bytes for individual raw slice in the rawdata journal of the index. 0 is not a valid value. If 0 is specified, rawChunkSizeBytes is set to the default value. |
RepFactor | String | False | Index replication control. This parameter applies to only clustering slaves. auto = Use the master index replication configuration value. 0 = Turn off replication for this index. |
RotatePeriodInSecs | String | False | How frequently (in seconds) to check if a new hot bucket needs to be created. Also, how frequently to check if there are any warm/cold buckets that should be rolled/frozen. |
ServiceMetaPeriod | String | False | Defines how frequently metadata is synced to disk, in seconds. |
SyncMeta | String | False | When true, a sync operation is called before file descriptor is closed on metadata file updates. This functionality improves integrity of metadata files, especially in regards to operating system crashes/machine failures. |
ThrottleCheckPeriod | String | False | Defines how frequently Splunk software checks for index throttling condition, in seconds. |
TstatsHomePath | String | False | Location to store datamodel acceleration TSIDX data for this index. If specified, it must be defined in terms of a volume definition. Path must be writable. |
WarmToColdScript | String | False | Path to a script to run when moving data from warm to cold. |
Name | Type | Description |
AssureUTF8 | Boolean | Boolean value indicating whether all data retrieved from the index is proper UTF8. |
BlockSignSize | Integer | Controls how many events make up a block for block signatures. If this is set to 0, block signing is disabled for this index. A recommended value is 100. |
BlockSignatureDatabase | String | The index that stores block signatures of events. This is a global setting, not a per index setting. |
BucketRebuildMemoryHint | String | Suggestion for the bucket rebuild process for the size of the time-series (tsidx) file to make. |
ColdPath | String | Filepath to the cold databases for the index. |
ColdPathExpanded | String | Absolute filepath to the cold databases. |
ColdToFrozenDir | String | Destination path for the frozen archive. Use as an alternative to a ColdToFrozenScript. |
ColdToFrozenScript | String | Path to the archiving script. |
CurrentDBSizeMB | Integer | Total size, in MB, of data stored in the index. The total includes data in the home, cold, and thawed paths. |
DefaultDatabase | String | If no index destination information is available in the input data, the index shown here is the destination of such data. |
EnableOnlineBucketRepair | Boolean | Enables asynchronous 'online fsck' bucket repair, which runs concurrently with splunk software. When enabled, you do not have to wait until buckets are repaired to start the splunk platform. |
EnableRealtimeSearch | Boolean | Indicates if this is a real-time search. This is a global setting, not a per index setting. |
FrozenTimePeriodInSecs | Integer | Number of seconds after which indexed data rolls to frozen. |
HomePath | String | An absolute path that contains the hot and warm buckets for the index. |
HomePathExpanded | String | An absolute filepath to the hot and warm buckets for the index. |
IndexThreads | String | Number of threads used for indexing. This is a global setting, not a per index setting. |
IsInternal | Boolean | Indicates if this is an internal index. |
LastInitTime | Datetime | Last time the index processor was successfully initialized. This is a global setting, not a per index setting. |
MaxBloomBackfillBucketAge | String | If a bucket (warm or cold) is older than this, Splunk software does not create (or re-create) its bloom filter. |
MaxConcurrentOptimizes | Integer | The number of concurrent optimize processes that can run against a hot bucket. |
MaxDataSize | String | The maximum size in MB for a hot DB to reach before a roll to warm is triggered. Specifying 'auto' or 'auto_high_volume' causes Splunk software to autotune this parameter (recommended). |
MaxHotBuckets | String | Maximum hot buckets that can exist per index. |
MaxHotIdleSecs | Integer | Maximum life, in seconds, of a hot bucket. |
MaxHotSpanSecs | Integer | Upper bound of target maximum timespan of hot/warm buckets in seconds. |
MaxMemMB | Integer | The amount of memory, expressed in MB, to allocate for buffering a single tsidx file into memory before flushing to disk. |
MaxMetaEntries | Integer | Sets the maximum number of unique lines in .data files in a bucket, which may help to reduce memory consumption. |
MaxTime | Datetime | ISO8601 timestamp of the newest event time in the index. |
MaxTimeUnreplicatedNoAcks | Integer | Upper limit, in seconds, on how long an event can sit in raw slice. Applies only if replication is enabled for this index. |
MaxTimeUnreplicatedWithAcks | Integer | Upper limit, in seconds, on how long events can sit unacknowledged in a raw slice. Applies only if you have enabled acks on forwarders and have replication enabled (with clustering). |
MaxTotalDataSizeMB | Integer | The maximum size of an index (in MB). If an index grows larger than the maximum size, the oldest data is frozen. |
MaxWarmDBCount | String | The maximum number of warm buckets. If this number is exceeded, the warm bucket/s with the lowest value for their latest times is moved to cold. |
MemPoolMB | String | Determines how much memory is given to the indexer memory pool. This is a global setting, not a per-index setting. |
MinRawFileSyncSecs | String | Specify an integer (or 'disable') for this parameter. This parameter sets how frequently splunkd forces a filesystem sync while compressing journal slices. |
MinStreamGroupQueueSize | Integer | Minimum size of the queue that stores events in memory before committing them to a tsidx file. |
MinTime | Datetime | ISO8601 timestamp of the oldest event time in the index. |
PartialServiceMetaPeriod | Integer | Related to serviceMetaPeriod. If set, it enables metadata sync every SPECIFIED seconds, but only for records where the sync can be done efficiently in-place, without requiring a full re-write of the metadata file. |
ProcessTrackerServiceInterval | Integer | Specifies, in seconds, how often the indexer checks the status of the child OS processes it launched to see if it can launch new processes for queued requests. If set to 0, the indexer checks child process status every second. |
QuarantineFutureSecs | Integer | Events with timestamp of QuarantineFutureSecs newer than 'now' are dropped into quarantine bucket. |
QuarantinePastSecs | Integer | Events with timestamp of quarantinePastSecs older than 'now' are dropped into quarantine bucket. |
RawChunkSizeBytes | Integer | Target uncompressed size in bytes for individual raw slice in the rawdata journal of the index. 0 is not a valid value. If 0 is specified, rawChunkSizeBytes is set to the default value. |
RepFactor | String | Index replication control. This parameter applies to only clustering slaves. auto = Use the master index replication configuration value. 0 = Turn off replication for this index. |
RotatePeriodInSecs | Integer | How frequently (in seconds) to check if a new hot bucket needs to be created. Also, how frequently to check if there are any warm/cold buckets that should be rolled/frozen. |
ServiceMetaPeriod | Integer | Defines how frequently metadata is synced to disk, in seconds. |
SuppressBannerList | String | List of indexes for which we suppress 'index missing' warning banner messages. This is a global setting, not a per index setting. |
Sync | String | Specifies the number of events that trigger the indexer to sync events. This is a global setting, not a per index setting. |
SyncMeta | Boolean | When true, a sync operation is called before file descriptor is closed on metadata file updates. This functionality improves integrity of metadata files, especially in regards to operating system crashes/machine failures. |
ThawedPath | String | An absolute path that contains the thawed (resurrected) databases for the index. Cannot be defined in terms of a volume definition. |
ThawedPathExpanded | String | Absolute filepath to the thawed (resurrected) databases. |
ThrottleCheckPeriod | Integer | Defines how frequently Splunk software checks for index throttling condition, in seconds. |
TotalEventCount | Integer | Total number of events in the index. |
TsidxDedupPostingsListMaxTermsLimit | Integer | This setting is valid only when tsidxWritingLevel is at 4 or higher. This maximum term limit sets an upper bound on the number of terms kept inside an in-memory hash table that serves to improve tsidx compression. |
TstatsHomePath | String | Location to store datamodel acceleration TSIDX data for this index. If specified, it must be defined in terms of a volume definition. Path must be writable. |
WarmToColdScript | String | Path to a script to run when moving data from warm to cold. |
Success | Boolean | Boolean indicating whether the stored procedure was executed successfully. |
ErrorCode | Integer | The error code in case the procedure is not executed successfully. |
ErrorMessage | String | The error message in case the procedure is not executed successfully. |
Update a saved search.
Name | Type | Required | Description |
Name | String | True | A name for the search |
Search | String | False | The search query to save |
Description | String | False | Description of this saved search. |
CronSchedule | String | False | The cron schedule to execute this search. For example: */5 * * * * causes the search to execute every 5 minutes. |
Disabled | Boolean | False | Indicates if this saved search is disabled. |
IsScheduled | Boolean | False | Indicates if this search is to be run on a schedule. |
IsVisible | Boolean | False | Indicates if this saved search appears in the visible saved search list. |
RealTimeSchedule | Boolean | False | If this value is set to 1, the scheduler bases its determination of the next scheduled search execution time on the current time. If this value is set to 0, it is determined based on the last search execution time. |
RunOnStartup | Boolean | False | Indicates whether this search runs on startup. If it does not run on startup, it runs at the next scheduled time. |
SchedulePriority | String | False | Indicates the scheduling priority of a specific search. The allowed values are default, higher, highest. |
Name | Type | Description |
Success | Boolean | Returns the success status of the created saved search. |
Message | String | Warnings from the server. |
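For example, a call like the following could reschedule an existing saved search. This is a minimal sketch: the procedure name UpdateSavedSearch, the EXECUTE syntax, and the search name are assumed here for illustration; substitute the names used in your own instance.
EXECUTE UpdateSavedSearch Name = 'ErrorsLastHour', CronSchedule = '*/5 * * * *', IsScheduled = true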
You can query the system tables described in this section to access schema information, information on data source functionality, and batch operation statistics.
The following tables return database metadata for Splunk:
The following tables return information about how to connect to and query the data source:
The following table returns query statistics for data modification queries:
Lists the available databases.
The following query retrieves all databases determined by the connection string:
SELECT * FROM sys_catalogs
Name | Type | Description |
CatalogName | String | The database name. |
Lists the available schemas.
The following query retrieves all available schemas:
SELECT * FROM sys_schemas
Name | Type | Description |
CatalogName | String | The database name. |
SchemaName | String | The schema name. |
Lists the available tables.
The following query retrieves the available tables and views:
SELECT * FROM sys_tables
Name | Type | Description |
CatalogName | String | The database containing the table or view. |
SchemaName | String | The schema containing the table or view. |
TableName | String | The name of the table or view. |
TableType | String | The table type (table or view). |
Description | String | A description of the table or view. |
IsUpdateable | Boolean | Whether the table can be updated. |
Describes the columns of the available tables and views.
The following query returns the columns and data types for the DataModels table:
SELECT ColumnName, DataTypeName FROM sys_tablecolumns WHERE TableName='DataModels'
Name | Type | Description |
CatalogName | String | The name of the database containing the table or view. |
SchemaName | String | The schema containing the table or view. |
TableName | String | The name of the table or view containing the column. |
ColumnName | String | The column name. |
DataTypeName | String | The data type name. |
DataType | Int32 | An integer indicating the data type. This value is determined at run time based on the environment. |
Length | Int32 | The storage size of the column. |
DisplaySize | Int32 | The designated column's normal maximum width in characters. |
NumericPrecision | Int32 | The maximum number of digits in numeric data. The column length in characters for character and date-time data. |
NumericScale | Int32 | The column scale or number of digits to the right of the decimal point. |
IsNullable | Boolean | Whether the column can contain null. |
Description | String | A brief description of the column. |
Ordinal | Int32 | The sequence number of the column. |
IsAutoIncrement | String | Whether the column value is assigned in fixed increments. |
IsGeneratedColumn | String | Whether the column is generated. |
IsHidden | Boolean | Whether the column is hidden. |
IsArray | Boolean | Whether the column is an array. |
IsReadOnly | Boolean | Whether the column is read-only. |
IsKey | Boolean | Indicates whether a field returned from sys_tablecolumns is the primary key of the table. |
Lists the available stored procedures.
The following query retrieves the available stored procedures:
SELECT * FROM sys_procedures
Name | Type | Description |
CatalogName | String | The database containing the stored procedure. |
SchemaName | String | The schema containing the stored procedure. |
ProcedureName | String | The name of the stored procedure. |
Description | String | A description of the stored procedure. |
ProcedureType | String | The type of the procedure, such as PROCEDURE or FUNCTION. |
Describes stored procedure parameters.
The following query returns information about all of the input parameters for the SelectEntries stored procedure:
SELECT * FROM sys_procedureparameters WHERE ProcedureName='SelectEntries' AND (Direction=1 OR Direction=2)
Name | Type | Description |
CatalogName | String | The name of the database containing the stored procedure. |
SchemaName | String | The name of the schema containing the stored procedure. |
ProcedureName | String | The name of the stored procedure containing the parameter. |
ColumnName | String | The name of the stored procedure parameter. |
Direction | Int32 | An integer corresponding to the type of the parameter: input (1), input/output (2), or output (4). Input/output parameters can be both input and output parameters. |
DataTypeName | String | The name of the data type. |
DataType | Int32 | An integer indicating the data type. This value is determined at run time based on the environment. |
Length | Int32 | The number of characters allowed for character data. The number of digits allowed for numeric data. |
NumericPrecision | Int32 | The maximum precision for numeric data. The column length in characters for character and date-time data. |
NumericScale | Int32 | The number of digits to the right of the decimal point in numeric data. |
IsNullable | Boolean | Whether the parameter can contain null. |
IsRequired | Boolean | Whether the parameter is required for execution of the procedure. |
IsArray | Boolean | Whether the parameter is an array. |
Description | String | The description of the parameter. |
Ordinal | Int32 | The index of the parameter. |
Describes the primary and foreign keys.
The following query retrieves the primary key for the DataModels table:
SELECT * FROM sys_keycolumns WHERE IsKey='True' AND TableName='DataModels'
Name | Type | Description |
CatalogName | String | The name of the database containing the key. |
SchemaName | String | The name of the schema containing the key. |
TableName | String | The name of the table containing the key. |
ColumnName | String | The name of the key column. |
IsKey | Boolean | Whether the column is a primary key in the table referenced in the TableName field. |
IsForeignKey | Boolean | Whether the column is a foreign key referenced in the TableName field. |
PrimaryKeyName | String | The name of the primary key. |
ForeignKeyName | String | The name of the foreign key. |
ReferencedCatalogName | String | The database containing the primary key. |
ReferencedSchemaName | String | The schema containing the primary key. |
ReferencedTableName | String | The table containing the primary key. |
ReferencedColumnName | String | The column name of the primary key. |
Describes the foreign keys.
The following query retrieves all foreign keys which refer to other tables:
SELECT * FROM sys_foreignkeys WHERE ForeignKeyType = 'FOREIGNKEY_TYPE_IMPORT'
Name | Type | Description |
CatalogName | String | The name of the database containing the key. |
SchemaName | String | The name of the schema containing the key. |
TableName | String | The name of the table containing the key. |
ColumnName | String | The name of the key column. |
PrimaryKeyName | String | The name of the primary key. |
ForeignKeyName | String | The name of the foreign key. |
ReferencedCatalogName | String | The database containing the primary key. |
ReferencedSchemaName | String | The schema containing the primary key. |
ReferencedTableName | String | The table containing the primary key. |
ReferencedColumnName | String | The column name of the primary key. |
ForeignKeyType | String | Designates whether the foreign key is an import (points to other tables) or export (referenced from other tables) key. |
Describes the primary keys.
The following query retrieves the primary keys from all tables and views:
SELECT * FROM sys_primarykeys
Name | Type | Description |
CatalogName | String | The name of the database containing the key. |
SchemaName | String | The name of the schema containing the key. |
TableName | String | The name of the table containing the key. |
ColumnName | String | The name of the key column. |
KeySeq | String | The sequence number of the primary key. |
KeyName | String | The name of the primary key. |
Describes the available indexes. By filtering on indexes, you can write more selective queries with faster query response times.
The following query retrieves all indexes that are not primary keys:
SELECT * FROM sys_indexes WHERE IsPrimary='false'
Name | Type | Description |
CatalogName | String | The name of the database containing the index. |
SchemaName | String | The name of the schema containing the index. |
TableName | String | The name of the table containing the index. |
IndexName | String | The index name. |
ColumnName | String | The name of the column associated with the index. |
IsUnique | Boolean | True if the index is unique. False otherwise. |
IsPrimary | Boolean | True if the index is a primary key. False otherwise. |
Type | Int16 | An integer value corresponding to the index type: statistic (0), clustered (1), hashed (2), or other (3). |
SortOrder | String | The sort order: A for ascending or D for descending. |
OrdinalPosition | Int16 | The sequence number of the column in the index. |
Returns information on the available connection properties and those set in the connection string.
When querying this table, the config connection string should be used:
jdbc:cdata:splunk:config:
This connection string enables you to query this table without a valid connection.
The following query retrieves all connection properties that have been set in the connection string or set through a default value:
SELECT * FROM sys_connection_props WHERE Value <> ''
Name | Type | Description |
Name | String | The name of the connection property. |
ShortDescription | String | A brief description. |
Type | String | The data type of the connection property. |
Default | String | The default value if one is not explicitly set. |
Values | String | A comma-separated list of possible values. A validation error is thrown if another value is specified. |
Value | String | The value you set or a preconfigured default. |
Required | Boolean | Whether the property is required to connect. |
Category | String | The category of the connection property. |
IsSessionProperty | String | Whether the property is a session property, used to save information about the current connection. |
Sensitivity | String | The sensitivity level of the property. This informs whether the property is obfuscated in logging and authentication forms. |
PropertyName | String | A camel-cased truncated form of the connection property name. |
Ordinal | Int32 | The index of the parameter. |
CatOrdinal | Int32 | The index of the parameter category. |
Hierarchy | String | Shows the associated dependent properties that need to be set alongside this one. |
Visible | Boolean | Informs whether the property is visible in the connection UI. |
ETC | String | Miscellaneous information about the property. |
Describes the SELECT query processing that the Cloud can offload to the data source.
See SQL Compliance for SQL syntax details.
Below is an example data set of SQL capabilities. Some aspects of SELECT functionality are returned in a comma-separated list if supported; otherwise, the column contains NO.
Name | Description | Possible Values |
AGGREGATE_FUNCTIONS | Supported aggregation functions. | AVG, COUNT, MAX, MIN, SUM, DISTINCT |
COUNT | Whether COUNT function is supported. | YES, NO |
IDENTIFIER_QUOTE_OPEN_CHAR | The opening character used to escape an identifier. | [ |
IDENTIFIER_QUOTE_CLOSE_CHAR | The closing character used to escape an identifier. | ] |
SUPPORTED_OPERATORS | A list of supported SQL operators. | =, >, <, >=, <=, <>, !=, LIKE, NOT LIKE, IN, NOT IN, IS NULL, IS NOT NULL, AND, OR |
GROUP_BY | Whether GROUP BY is supported, and, if so, the degree of support. | NO, NO_RELATION, EQUALS_SELECT, SQL_GB_COLLATE |
OJ_CAPABILITIES | The supported varieties of outer joins. | NO, LEFT, RIGHT, FULL, INNER, NOT_ORDERED, ALL_COMPARISON_OPS |
OUTER_JOINS | Whether outer joins are supported. | YES, NO |
SUBQUERIES | Whether subqueries are supported, and, if so, the degree of support. | NO, COMPARISON, EXISTS, IN, CORRELATED_SUBQUERIES, QUANTIFIED |
STRING_FUNCTIONS | Supported string functions. | LENGTH, CHAR, LOCATE, REPLACE, SUBSTRING, RTRIM, LTRIM, RIGHT, LEFT, UCASE, SPACE, SOUNDEX, LCASE, CONCAT, ASCII, REPEAT, OCTET, BIT, POSITION, INSERT, TRIM, UPPER, REGEXP, LOWER, DIFFERENCE, CHARACTER, SUBSTR, STR, REVERSE, PLAN, UUIDTOSTR, TRANSLATE, TRAILING, TO, STUFF, STRTOUUID, STRING, SPLIT, SORTKEY, SIMILAR, REPLICATE, PATINDEX, LPAD, LEN, LEADING, KEY, INSTR, INSERTSTR, HTML, GRAPHICAL, CONVERT, COLLATION, CHARINDEX, BYTE |
NUMERIC_FUNCTIONS | Supported numeric functions. | ABS, ACOS, ASIN, ATAN, ATAN2, CEILING, COS, COT, EXP, FLOOR, LOG, MOD, SIGN, SIN, SQRT, TAN, PI, RAND, DEGREES, LOG10, POWER, RADIANS, ROUND, TRUNCATE |
TIMEDATE_FUNCTIONS | Supported date/time functions. | NOW, CURDATE, DAYOFMONTH, DAYOFWEEK, DAYOFYEAR, MONTH, QUARTER, WEEK, YEAR, CURTIME, HOUR, MINUTE, SECOND, TIMESTAMPADD, TIMESTAMPDIFF, DAYNAME, MONTHNAME, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, EXTRACT |
REPLICATION_SKIP_TABLES | Indicates tables skipped during replication. | |
REPLICATION_TIMECHECK_COLUMNS | A string array of columns to check, in the given order, for use as the modified column during replication. |
IDENTIFIER_PATTERN | String value indicating what string is valid for an identifier. | |
SUPPORT_TRANSACTION | Indicates if the provider supports transactions such as commit and rollback. | YES, NO |
DIALECT | Indicates the SQL dialect to use. | |
KEY_PROPERTIES | Indicates the properties which identify the uniform database. | |
SUPPORTS_MULTIPLE_SCHEMAS | Indicates if multiple schemas may exist for the provider. | YES, NO |
SUPPORTS_MULTIPLE_CATALOGS | Indicates if multiple catalogs may exist for the provider. | YES, NO |
DATASYNCVERSION | The CData Data Sync version needed to access this driver. | Standard, Starter, Professional, Enterprise |
DATASYNCCATEGORY | The CData Data Sync category of this driver. | Source, Destination, Cloud Destination |
SUPPORTSENHANCEDSQL | Whether enhanced SQL functionality beyond what is offered by the API is supported. | TRUE, FALSE |
SUPPORTS_BATCH_OPERATIONS | Whether batch operations are supported. | YES, NO |
SQL_CAP | All supported SQL capabilities for this driver. | SELECT, INSERT, DELETE, UPDATE, TRANSACTIONS, ORDERBY, OAUTH, ASSIGNEDID, LIMIT, LIKE, BULKINSERT, COUNT, BULKDELETE, BULKUPDATE, GROUPBY, HAVING, AGGS, OFFSET, REPLICATE, COUNTDISTINCT, JOINS, DROP, CREATE, DISTINCT, INNERJOINS, SUBQUERIES, ALTER, MULTIPLESCHEMAS, GROUPBYNORELATION, OUTERJOINS, UNIONALL, UNION, UPSERT, GETDELETED, CROSSJOINS, GROUPBYCOLLATE, MULTIPLECATS, FULLOUTERJOIN, MERGE, JSONEXTRACT, BULKUPSERT, SUM, SUBQUERIESFULL, MIN, MAX, JOINSFULL, XMLEXTRACT, AVG, MULTISTATEMENTS, FOREIGNKEYS, CASE, LEFTJOINS, COMMAJOINS, WITH, LITERALS, RENAME, NESTEDTABLES, EXECUTE, BATCH, BASIC, INDEX |
PREFERRED_CACHE_OPTIONS | A string value that specifies the preferred cacheOptions. |
ENABLE_EF_ADVANCED_QUERY | Indicates if the driver directly supports advanced queries coming from Entity Framework. If not, queries will be handled client side. | YES, NO |
PSEUDO_COLUMNS | A string array indicating the available pseudo columns. | |
MERGE_ALWAYS | If the value is true, merge mode is forcibly executed in Data Sync. | TRUE, FALSE |
REPLICATION_MIN_DATE_QUERY | A select query to return the replicate start datetime. | |
REPLICATION_MIN_FUNCTION | Allows a provider to specify the formula name to use for executing a server side min. | |
REPLICATION_START_DATE | Allows a provider to specify a replicate start date. |
REPLICATION_MAX_DATE_QUERY | A select query to return the replicate end datetime. | |
REPLICATION_MAX_FUNCTION | Allows a provider to specify the formula name to use for executing a server side max. | |
IGNORE_INTERVALS_ON_INITIAL_REPLICATE | A list of tables which will skip dividing the replicate into chunks on the initial replicate. | |
CHECKCACHE_USE_PARENTID | Indicates whether the CheckCache statement should be done against the parent key column. | TRUE, FALSE |
CREATE_SCHEMA_PROCEDURES | Indicates stored procedures that can be used for generating schema files. |
The following query retrieves the operators that can be used in the WHERE clause:
SELECT * FROM sys_sqlinfo WHERE Name = 'SUPPORTED_OPERATORS'
Note that individual tables may have different limitations or requirements on the WHERE clause; refer to the Data Model section for more information.
Name | Type | Description |
NAME | String | A component of SQL syntax, or a capability that can be processed on the server. |
VALUE | String | Detail on the supported SQL or SQL syntax. |
Returns information about attempted modifications.
The following query retrieves the Ids of the modified rows in a batch operation:
SELECT * FROM sys_identity
Name | Type | Description |
Id | String | The database-generated Id returned from a data modification operation. |
Batch | String | An identifier for the batch. 1 for a single operation. |
Operation | String | The result of the operation in the batch: INSERTED, UPDATED, or DELETED. |
Message | String | SUCCESS or an error message if the update in the batch failed. |
The connection string properties are the various options that can be used to establish a connection. This section provides a complete list of the options you can configure in the connection string for this provider. Click the links for further details.
For more information on establishing a connection, see Establishing a Connection.
Property | Description |
AuthScheme | Whether to use Basic Authentication, AccessToken or HTTPEventCollectorToken Authentication when connecting to Splunk. |
AccessToken | The Access Token used for accessing your Splunk account. |
HTTPEventCollectorToken | The HTTP Event Collector Token is used for accessing the HTTP Event Collector feature on your Splunk account. |
URL | The URL to your Splunk endpoint. |
User | The Splunk user account used to authenticate. |
Password | The password used to authenticate the user. |
Property | Description |
SSLServerCert | The certificate to be accepted from the server when connecting using TLS/SSL. |
Property | Description |
Verbosity | The verbosity level that determines the amount of detail included in the log file. |
Property | Description |
BrowsableSchemas | This property restricts the schemas reported to a subset of the available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC. |
Property | Description |
IncludeInternalFields | Whether or not the CData Cloud should push the internal fields. These fields include: user, eventtype, etc. |
MaxRows | Limits the number of rows returned when no aggregation or GROUP BY is used in the query. This takes precedence over LIMIT clauses. |
MaxThreads | Specifies the number of concurrent requests. Only used when UseJobs is true. |
Pagesize | The maximum number of results to return per page from Splunk. |
PseudoColumns | This property indicates whether or not to include pseudo columns as columns to the table. |
RowScanDepth | Set this property to control the number of rows scanned when TypeDetectionScheme is set to RowScan. |
Timeout | The value in seconds until the timeout error is thrown, canceling the operation. |
TypeDetectionScheme | Determines how the data types of columns are detected. |
UseJobs | Specifies whether to use the jobs endpoint instead of the export endpoint. If set to true, the maximum number of returned rows is configured in Splunk's limits.conf file. |
This section provides a complete list of the Authentication properties you can configure in the connection string for this provider.
Property | Description |
AuthScheme | Whether to use Basic Authentication, AccessToken or HTTPEventCollectorToken Authentication when connecting to Splunk. |
AccessToken | The Access Token used for accessing your Splunk account. |
HTTPEventCollectorToken | The HTTP Event Collector Token is used for accessing the HTTP Event Collector feature on your Splunk account. |
URL | The URL to your Splunk endpoint. |
User | The Splunk user account used to authenticate. |
Password | The password used to authenticate the user. |
Whether to use Basic Authentication, AccessToken or HTTPEventCollectorToken Authentication when connecting to Splunk.
string
"Basic"
The Access Token used for accessing your Splunk account.
string
""
The Access Token used for accessing your Splunk account.
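For illustration, a token-based connection string might look like the following sketch; the AuthScheme value shown and the placeholder token are assumptions, so adjust them to your deployment:
AuthScheme=AccessToken;AccessToken=<your Splunk authentication token>;URL=https://yoursitename.splunk.com:8089;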
The HTTP Event Collector Token is used for accessing the HTTP Event Collector feature on your Splunk account.
string
""
The HTTP Event Collector Token is used for accessing the HTTP Event Collector feature on your Splunk account.
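As a sketch, an HTTP Event Collector connection might be configured as follows; the placeholder token and the exact AuthScheme value are assumptions for illustration:
AuthScheme=HTTPEventCollectorToken;HTTPEventCollectorToken=<your HEC token>;URL=https://yoursitename.splunk.com:8089;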
The URL to your Splunk endpoint.
string
""
The URL to your Splunk endpoint; for example, https://yoursitename.splunk.com:8089.
The port should be set to the Splunk management port (default 8089).
The Splunk user account used to authenticate.
string
""
Together with Password, this field is used to authenticate against the Splunk server.
The password used to authenticate the user.
string
""
The User and Password are together used to authenticate with the server.
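Taken together, a credential-based connection string might look like the following sketch; AuthScheme can typically be omitted because Basic is the default, and the values shown are placeholders:
URL=https://yoursitename.splunk.com:8089;User=<your user>;Password=<your password>;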
This section provides a complete list of the SSL properties you can configure in the connection string for this provider.
Property | Description |
SSLServerCert | The certificate to be accepted from the server when connecting using TLS/SSL. |
The certificate to be accepted from the server when connecting using TLS/SSL.
string
""
If using a TLS/SSL connection, this property can be used to specify the TLS/SSL certificate to be accepted from the server. Any other certificate that is not trusted by the machine is rejected.
This property can take the following forms:
Description | Example |
A full PEM Certificate (example shortened for brevity) | -----BEGIN CERTIFICATE----- MIIChTCCAe4CAQAwDQYJKoZIhv......Qw== -----END CERTIFICATE----- |
A path to a local file containing the certificate | C:\cert.cer |
The public key (example shortened for brevity) | -----BEGIN RSA PUBLIC KEY----- MIGfMA0GCSq......AQAB -----END RSA PUBLIC KEY----- |
The MD5 Thumbprint (hex values can also be either space or colon separated) | ecadbdda5a1529c58a1e9e09828d70e4 |
The SHA1 Thumbprint (hex values can also be either space or colon separated) | 34a929226ae0819f2ec14b4a3d904f801cbb150d |
If not specified, any certificate trusted by the machine is accepted.
Use '*' to accept all certificates. Note that this is not recommended due to security concerns.
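As an illustrative sketch, the property can be appended to a credential-based connection string, for example to pin the server certificate by the SHA1 thumbprint shown above (credentials are placeholders):
URL=https://yoursitename.splunk.com:8089;User=<your user>;Password=<your password>;SSLServerCert=34a929226ae0819f2ec14b4a3d904f801cbb150d;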
This section provides a complete list of the Logging properties you can configure in the connection string for this provider.
Property | Description |
Verbosity | The verbosity level that determines the amount of detail included in the log file. |
The verbosity level that determines the amount of detail included in the log file.
string
"1"
The verbosity level determines the amount of detail that the Cloud reports to the Logfile. Verbosity levels from 1 to 5 are supported. These are detailed in the Logging page.
This section provides a complete list of the Schema properties you can configure in the connection string for this provider.
Property | Description |
BrowsableSchemas | This property restricts the schemas reported to a subset of the available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC. |
This property restricts the schemas reported to a subset of the available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC.
string
""
Listing the schemas from databases can be expensive. Providing a list of schemas in the connection string improves the performance.
This section provides a complete list of the Miscellaneous properties you can configure in the connection string for this provider.
Property | Description |
IncludeInternalFields | Whether or not the CData Cloud should push the internal fields. These fields include: user, eventtype, etc. |
MaxRows | Limits the number of rows returned when no aggregation or GROUP BY is used in the query. This takes precedence over LIMIT clauses. |
MaxThreads | Specifies the number of concurrent requests. Only used when UseJobs is true. |
Pagesize | The maximum number of results to return per page from Splunk. |
PseudoColumns | This property indicates whether or not to include pseudo columns as columns to the table. |
RowScanDepth | Set this property to control the number of rows scanned when TypeDetectionScheme is set to RowScan. |
Timeout | The value in seconds until the timeout error is thrown, canceling the operation. |
TypeDetectionScheme | Determines how the data types of columns are detected. |
UseJobs | Specifies whether to use the jobs endpoint instead of the export endpoint. If set to true, the maximum number of returned rows is configured in Splunk's limits.conf file. |
Whether or not the CData Cloud should push the internal fields. These fields include: user, eventtype, etc.
bool
false
Whether or not the CData Cloud should push the internal fields. These fields include: user, eventtype, etc.
Limits the number of rows returned when no aggregation or GROUP BY is used in the query. This takes precedence over LIMIT clauses.
int
-1
Limits the number of rows returned when no aggregation or GROUP BY is used in the query. This takes precedence over LIMIT clauses.
Specifies the number of concurrent requests. Only used when UseJobs is true.
string
"5"
This property allows you to issue multiple requests simultaneously, thereby improving performance. Default value is 5 threads. Setting a higher value can result in OutOfMemory issues.
The maximum number of results to return per page from Splunk.
int
10000
The Pagesize property affects the maximum number of results to return per page from Splunk. Setting a higher value may result in better performance at the cost of additional memory allocated per page consumed.
This property indicates whether or not to include pseudo columns as columns to the table.
string
""
This setting is particularly helpful in Entity Framework, which does not allow you to set a value for a pseudo column unless it is a table column. The value of this connection setting is of the format "Table1=Column1, Table1=Column2, Table2=Column3". You can use the "*" character to include all tables and all columns; for example, "*=*".
Set this property to control the number of rows scanned when TypeDetectionScheme is set to RowScan.
string
"50"
Determines the number of rows used to determine the column data types.
Setting a high value may decrease performance. Setting a low value may prevent the data type from being determined properly, especially when there is null data.
The value in seconds until the timeout error is thrown, canceling the operation.
int
60
If Timeout = 0, operations do not time out. The operations run until they complete successfully or until they encounter an error condition.
If Timeout expires and the operation is not yet complete, the Cloud throws an exception.
Determines how the data types of columns are detected.
string
"RowScan"
None | Setting TypeDetectionScheme to None will return all columns as the string type. |
RowScan | Setting TypeDetectionScheme to RowScan will scan rows to heuristically determine the data type. The RowScanDepth determines the number of rows to be scanned. |
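For example, if columns contain many leading null values, you might widen the heuristic scan by combining the two properties; the values below are illustrative only:
TypeDetectionScheme=RowScan;RowScanDepth=200;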
Specifies whether to use the jobs endpoint instead of the export endpoint. If set to true, the maximum number of returned rows is configured in Splunk's limits.conf file.
bool
false
Whether to use the jobs endpoint instead of the export endpoint. While Jobs generally provide higher performance, the initial response time may be longer. If a Timeout error occurs, set the Timeout connection property to a higher value.
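As a sketch, enabling the jobs endpoint together with a longer timeout and more concurrent requests might look like the following; the values are illustrative only:
UseJobs=true;MaxThreads=10;Timeout=120;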