CData Cloud is a cloud-hosted solution that provides access to Splunk across several standard services and protocols. Any application that can connect to a MySQL or SQL Server database can connect to Splunk through CData Cloud.
CData Cloud lets you standardize and configure connections to Splunk just as you would any other OData endpoint or standard SQL Server/MySQL database.
This page provides a guide to Establishing a Connection to Splunk in CData Cloud, information on the available resources, and a reference of the available connection properties.
Establishing a Connection shows how to authenticate to Splunk and set the connection properties required to create a database in CData Cloud.
Accessing data from Splunk through the available standard services and administering CData Cloud are described in detail in the CData Cloud documentation.
To connect to Splunk, select the corresponding icon on the Database tab. Required properties are listed under Settings. The Advanced tab lists connection properties that are not typically required.
Set URL to a valid Splunk server. By default, the Cloud makes requests on port 8089.
By default, the Cloud attempts to negotiate TLS/SSL with the server.
There are two ways to authenticate to Splunk: logging in with Splunk credentials, or using a Splunk authentication token.
To authenticate with Splunk credentials, set User and Password to your login credentials.
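For example, a minimal sketch of the connection properties for credential-based access (all values are placeholders):
URL=https://yourserver:8089;User=admin;Password=changeme;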
Accessing Splunk via an authentication token lets you use Representational State Transfer (REST) calls to access the Splunk platform. On Splunk Enterprise you can also use the CLI. Either method lets you access the instance and make requests without authenticating with credentials.
Note: Unless you are accessing a search head cluster (where the same token can be used to access all of the available cluster heads), you need a separate token for each instance you access.
To authenticate with a Splunk token:
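As an illustration only, the settings might resemble the following; the property names (AuthScheme, APIToken) are assumptions rather than values confirmed by this page, so check the connection property reference for the exact names:
AuthScheme=APIToken;APIToken=<your token value>;URL=https://yourserver:8089;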
By default, the Cloud attempts to negotiate SSL/TLS by checking the server's certificate against the system's trusted certificate store.
To specify another certificate, see the SSLServerCert property for the available formats.
To connect through the Windows system proxy, no additional connection properties need to be set. To connect to other proxies, set ProxyAutoDetect to false.
In addition, to authenticate to an HTTP proxy, set ProxyAuthScheme, ProxyUser, and ProxyPassword, along with ProxyServer and ProxyPort.
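For example, a sketch of the proxy-related properties (the host, port, scheme, and credentials shown are placeholders):
ProxyAutoDetect=false;ProxyServer=proxy.example.com;ProxyPort=8080;ProxyAuthScheme=BASIC;ProxyUser=proxyuser;ProxyPassword=proxypassword;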
Set the following properties:
The Cloud models Splunk reports, searches, datasets, and data models as relational database tables that can be read from and written to using SQL-92 queries.
You can work with all of the tables in your account. When you connect, the Cloud retrieves metadata from Splunk to dynamically reflect any changes to the table schemas.
You can persist static schemas across connections by calling the CreateSchema stored procedure. The stored procedure saves the schema to a text file. The text file has a simple format, which also makes the schema easy to customize.
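As a rough sketch, a call to persist a schema might look like the following; the TableName and FileName parameter names and the output path are assumptions, so confirm them against the stored procedure reference:
EXECUTE CreateSchema TableName = 'SearchJobs', FileName = 'C:/CData/Schemas/SearchJobs.rsd'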
See Tables for details on updating and querying datasets, data models, and searches.
The Cloud also exposes data through Views that represent the following Splunk objects.
The Cloud models Splunk data as a list of relational database tables that can be queried using standard SQL statements.
Name | Description |
DataModels | Create, query, update, and delete data models in Splunk. |
Datasets | Create, query, update, and delete datasets in Splunk. |
SearchJobs | Create, query, update, and delete search jobs in Splunk. |
Create, query, update, and delete data models in Splunk.
The Cloud uses the Splunk APIs to process search criteria that reference the Id column. This column supports server-side processing for the = operator. The Cloud processes other filters client-side within the Cloud.
For example, the following query is processed server-side by the Splunk APIs.
SELECT * FROM DataModels WHERE Id = 'SampleModel'
The Id column is the minimum requirement for an insert. Only the Id and Acceleration columns are allowed in inserts to the DataModels table.
INSERT INTO DataModels (Id, Acceleration) VALUES ('initialname', '{"enabled":false,"earliest_time":"","hunk.file_format":"","hunk.dfs_block_size":0,"hunk.compression_codec":""}' )
The DataModels table allows updates of the Acceleration column when Id is specified. The Provisional pseudocolumn can also be set.
UPDATE DataModels SET Provisional = 'true', Acceleration = '{"enabled":false,"earliest_time": "-1mon", "cron_schedule": "0 */12 * * *","hunk.file_format":"","hunk.dfs_block_size":0,"hunk.compression_codec":""}' WHERE Id = 'initialname'
The DataModels table allows deletes when Id is specified.
DELETE FROM Datamodels WHERE Id = 'initialname'
Name | Type | ReadOnly | References | Description |
Id [KEY] | String | True | | Link of the data model. |
Disabled | Boolean | True | | Indicates if the data model is disabled/enabled. |
UpdatedAt | Datetime | True | | Datetime of the last update of the data model. |
Description | String | True | | Description of the data model. |
Name | String | False | | The name of the data model in Splunk. |
DisplayName | String | True | | The name displayed for the data model in Splunk. |
Author | String | True | | Splunk user who created the data model. |
App | String | True | | Splunk app where the data model is shared. |
Owner | String | True | | Splunk user who owns the data model. |
CanShareApp | Boolean | True | | Boolean indicating whether the data model can be shared in an app. |
CanShareGlobal | Boolean | True | | Boolean indicating whether the data model can be shared globally. |
CanShareUser | Boolean | True | | Boolean indicating whether the data model can be shared by the user. |
CanWrite | Boolean | True | | Boolean indicating whether the data model can be extended by the user. |
Modifiable | Boolean | True | | Boolean indicating whether the data model can be modified. |
Removable | Boolean | True | | Boolean indicating whether the data model can be removed. |
Acceleration | String | False | | Acceleration settings for the data model. Supply JSON to specify any or all of the following settings: enabled (true or false), earliest_time (time modifier), or cron_schedule (cron string). |
AccelerationAllowed | Boolean | True | | Boolean indicating whether acceleration is allowed for the data model. |
AccelerationHunkCompression | String | True | | Specifies the compression codec to be used for the accelerated orc or parquet format files. |
DatasetCommands | String | True | | Data model commands. |
DatasetDescription | String | True | | The JSON describing the data model. |
DatasetCurrentCommand | Integer | True | | Current command of the data model. |
DatasetEarliestTime | Datetime | True | | Earliest time of data model events being processed. |
DatasetLatestTime | Datetime | True | | Latest time of data model events being processed. |
DatasetDiversity | String | True | | Diversity of events being processed. |
DatasetLimiting | Integer | True | | Limitations of events being processed. |
DatasetMode | String | True | | Search mode of events being processed. |
DatasetSampleRatio | String | True | | Sample ratio of the data model. |
DatasetFields | String | True | | Indexed fields the data model has. |
DatasetType | String | True | | Dataset type. |
Type | String | True | | Data model type. |
Digest | String | True | | Content digest type. |
TagsWhitelist | String | True | | Whitelist of data model tags. |
ReadPermitions | String | True | | Permissions to read this data model. |
WritePermitions | String | True | | Permissions to write to this data model. |
Sharing | String | True | | Data model sharing type. |
Username | String | True | | Username of the Splunk user. |
The WHERE clause of a SELECT statement can use pseudocolumn fields to fine-tune the tuples returned from the data source.
Name | Type | Description |
Provisional | Boolean | Indicates whether the data model is provisional. Provisional data models are not saved. Specify true to validate a data model before saving it. |
Create, query, update, and delete datasets in Splunk.
The Datasets table requires DataModelId in the WHERE clause. The DataModelId column supports server-side processing for the = operator. The Cloud processes other search criteria client-side within the Cloud.
SELECT * FROM DataSets WHERE DataModelId = 'SampleModel'
Splunk allows inserts only when DataModelId, ParentName, and ObjectName are all specified.
INSERT INTO [Datasets] (ObjectName, ParentName, DataModelId) VALUES ('SampleSet', 'BaseEvent', 'SampleModel')
The Datasets table allows updates when DataModelId is specified. The columns that can be updated in this case are the following: Description and DisplayName.
When ObjectName is also specified, you can update the following columns: ObjectDisplayName, ParentName, Comment, Fields, Calculations, Constraints, Lineage, ObjectSearchNoFields, ObjectSearch, AutoextractSearch, PreviewSearch, AccelerationSearch, BaseSearch, and TsidxNamespace.
UPDATE Datasets SET Description = 'model description', DisplayName = 'Model Display Name' WHERE DataModelId = 'SampleModel'
UPDATE Datasets SET ParentName = 'BaseEvent', BaseSearch = '| search (index=* OR index=_*) | fields _time, RootObject', AccelerationSearch = ' search (index=* OR index=_*) ' WHERE DataModelId = 'SampleModel' AND ObjectName = 'SampleSet'
Datasets can be deleted by providing the DataModelId and the ObjectName of the dataset.
DELETE FROM Datasets WHERE DataModelId = 'SampleModel' AND ObjectName = 'SampleSet'
Name | Type | ReadOnly | References | Description |
ObjectName [KEY] | String | False | | Name of the dataset object. |
DatamodelId [KEY] | String | False | DataModels.Id | Id of the data model the object belongs to. |
DisplayName | String | False | | Name of the data model the object belongs to. |
Description | String | False | | Dataset description. |
ObjectNameList | String | True | | List of the objects in the data model. |
ObjectDisplayName | String | False | | Name displayed in Splunk for the object. |
ParentName | String | False | | Name of the Parent Event. |
Comment | String | False | | Dataset comments. |
Fields | String | False | | Dataset events indexed fields. |
Calculations | String | False | | Saved calculations for dataset fields. |
Constraints | String | False | | Saved constraints for dataset fields. |
Lineage | String | False | | Dataset lineage. |
ObjectSearchNoFields | String | False | | Object search query without fields. |
ObjectSearch | String | False | | Saved search query for the object. |
AutoextractSearch | String | False | | Search query for autoextraction. |
PreviewSearch | String | False | | Search preview query. |
AccelerationSearch | String | False | | Search query including acceleration. |
BaseSearch | String | False | | Basic search query. |
TsidxNamespace | String | False | | Allocated namespace. |
EventBased | Integer | True | | Number of Event-Based objects in the data model. |
TransactionBased | Integer | True | | Number of Transaction-Based objects in the data model. |
SearchBased | Integer | True | | Number of Search-Based objects in the data model. |
Create, query, update, and delete search jobs in Splunk.
The Cloud will use the Splunk APIs to process the search Id (Sid) criteria specified in the WHERE clause. The Sid column supports server-side processing for the = operator. The Cloud processes other search criteria client-side within the Cloud.
SELECT * FROM SearchJobs
SELECT * FROM SearchJobs WHERE Sid = '123456789.1234'
Splunk allows inserts only when EventSearch is specified. You can insert the Custom, EarliestTime, LatestTime, Label, and StatusBuckets columns and all pseudocolumns.
INSERT INTO SearchJobs (Custom, EventSearch, LatestTime, Timeout) VALUES ('custom1=test1, custom2=test2', ' from datamodel SampleModel', 'now', '60')
The SearchJobs table allows updates of the Custom column only when Sid is specified.
UPDATE SearchJobs SET Custom = 'custom1=test3, custom2=test4' WHERE sid = '123456789.1234'
SearchJobs can be deleted by providing the Sid.
DELETE FROM SearchJobs WHERE Sid = '123456789.1234'
Name | Type | ReadOnly | References | Description |
Sid [KEY] | String | False | | The search Id number. |
EventSearch | String | False | | Subset of the entire search that is before any transforming commands. |
Custom | String | False | | Custom job property. In an INSERT operation, pass the values as a comma-separated list of pairs of keys and values. |
EarliestTime | String | False | | The earliest time a search job is configured to start. |
LatestTime | String | False | | The latest time a search job is configured to start. |
CursorTime | String | True | | The earliest time from which no events are later scanned. Can be used to indicate progress. |
Delegate | String | True | | For saved searches, specifies jobs that were started by the user. Defaults to scheduler. |
DiskUsage | Long | True | | The total amount of disk space used, in bytes. |
DispatchState | String | True | | The state of the search. Can be any of QUEUED, PARSING, RUNNING, PAUSED, FINALIZING, FAILED, or DONE. |
DoneProgress | Double | True | | A number between 0 and 1.0 that indicates the approximate progress of the search. doneProgress = (latestTime-cursorTime) / (latestTime-earliestTime) |
DropCount | Integer | True | | For real-time searches only, the number of possible events that were dropped due to the rt_queue_size (defaults to 100000). |
EventAvailableCount | Integer | True | | The number of events that are available for export. |
EventCount | Integer | True | | The number of events returned by the search. |
EventFieldCount | Integer | True | | The number of fields found in the search results. |
EventIsStreaming | Boolean | True | | Indicates if the events of this search are being streamed. |
EventIsTruncated | Boolean | True | | Indicates if the events of the search are not stored, making them unavailable from the events endpoint for the search. |
EventPreviewableCount | Integer | True | | Number of in-memory events that are not yet committed to disk. |
EventSorting | String | True | | Indicates if the events of this search are sorted, and in which order. |
IsDone | Boolean | True | | Indicates if the search has completed. |
IsEventsPreviewEnabled | String | True | | Indicates if the timeline_events_preview setting is enabled in limits.conf. |
IsFailed | Boolean | True | | Indicates if there was a fatal error executing the search. For example, invalid search string syntax. |
IsFinalized | Boolean | True | | Indicates if the search was finalized (stopped before completion). |
IsPaused | Boolean | True | | Indicates if the search is paused. |
IsPreviewEnabled | Boolean | True | | Indicates if previews are enabled. |
IsRealTimeSearch | Boolean | True | | Indicates if the search is a real-time search. |
IsRemoteTimeline | Boolean | True | | Indicates if the remote timeline feature is enabled. |
IsSaved | Boolean | True | | Indicates that the search job is saved on disk. Search artifacts are saved on disk for 7 days from the last time that the job was viewed or touched. |
IsSavedSearch | Boolean | True | | Indicates if this is a saved search run using the scheduler. |
IsZombie | Boolean | True | | Indicates if the process running the search died without finishing the search. |
Keywords | String | True | | All positive keywords used by this search. A positive keyword is a keyword that is not in a NOT clause. |
Label | String | False | | Custom name created for this search. |
Messages | String | True | | Errors and debug messages. |
NumPreviews | Integer | True | | Number of previews generated so far for this search job. |
Performance | String | True | | A representation of the execution costs. |
Priority | Integer | True | | An integer between 0-10 that indicates the search priority. |
RemoteSearch | String | True | | The search string that is sent to every search peer. |
ReportSearch | String | True | | If reporting commands are used, the reporting search. |
ResultCount | Integer | True | | The total number of results returned by the search. In other words, this is the subset of scanned events (represented by the ScanCount) that actually matches the search terms. |
ResultIsStreaming | Boolean | True | | Indicates if the final results of the search are available using streaming (for example, no transforming operations). |
ResultPreviewCount | Integer | True | | The number of result rows in the latest preview results. |
RunDuration | Decimal | True | | Time in seconds that the search took to complete. |
ScanCount | Integer | True | | The number of events that are scanned or read off disk. |
SearchEarliestTime | Datetime | True | | Specifies the earliest time for a search, as specified in the search command rather than the EarliestTime parameter. It does not snap to the indexed data time bounds for all-time searches. |
SearchLatestTime | Datetime | True | | Specifies the latest time for a search, as specified in the search command rather than the LatestTime parameter. It does not snap to the indexed data time bounds for all-time searches. |
SearchProviders | String | True | | A list of all the search peers that were contacted. |
StatusBuckets | Integer | False | | Maximum number of timeline buckets. |
TTL | String | True | | The time to live, or the time before the search job expires after it completes. |
The WHERE clause of a SELECT statement can use pseudocolumn fields to fine-tune the tuples returned from the data source.
Name | Type | Description |
SearchMode | String | Searching mode, realtime or normal. If set to realtime, the search runs over the live data. The allowed values are: normal, realtime. |
EnableLookups | Boolean | Indicates whether lookups should be applied to events. |
AutoPause | Integer | If specified, the search job pauses after this many seconds of inactivity. (0 means never autopause.) |
AutoCancel | Integer | If specified, the job automatically cancels after this many seconds of inactivity. (0 means never autocancel.) |
AdhocSearchLevel | String | Specify a search mode. Use one of the following search modes: verbose, fast, or smart. The allowed values are: verbose, fast, smart. |
ForceBundleReplication | Boolean | Specifies whether this search should cause (and wait for, depending on the value of SyncBundleReplication) bundle synchronization with all search peers. |
IndexEarliest | String | Specify a time string. Sets the earliest inclusive time bounds for the search, based on the index time bounds. |
IndexLatest | String | Specify a time string. Sets the latest exclusive time bounds for the search, based on the index time bounds. |
IndexedRealtime | Boolean | Indicates whether or not to use the indexed-realtime mode for real-time searches. |
IndexedRealtimeOffset | Integer | Sets the disk sync delay for indexed real-time search (seconds). |
MaxCount | Integer | The number of events that can be accessible in any given status bucket. |
MaxTime | Integer | The number of seconds to run this search before finalizing. Specify 0 to never finalize. |
Namespace | String | The application namespace in which to restrict searches. |
Now | String | Specify a time string to set the absolute time used for any relative time specifier in the search. Defaults to the current system time. You can specify a relative time modifier for this parameter. For example, specify +2d to specify the current time plus two days. |
ReduceFrequency | Integer | Determines how frequently to run the MapReduce reduce phase on accumulated map values. |
ReloadMacros | Boolean | Specifies whether to reload macro definitions from the configuration file. |
RemoteServerList | String | Comma-separated list of (possibly wildcarded) servers from which raw events should be pulled. |
ReplaySpeed | Integer | Indicate a real-time search replay speed factor. For example, 1 indicates normal speed, 0.5 indicates half of normal speed, and 2 indicates twice as fast as normal. |
ReplayStartTime | String | Relative wall-clock start time for the replay. |
ReplayEndTime | String | Relative end time for the replay clock. The replay stops when the clock time reaches this time. |
ReuseMaxSecondsAgo | Integer | Specifies the number of seconds ago to check when an identical search is started and return the search Id of the job instead of starting a new job. |
RequiredField | String | Adds a required field to the search. |
RealTimeBlocking | Boolean | For a real-time search, indicates if the indexer blocks if the queue for this search is full. |
RealTimeIndexFilter | Boolean | For a real-time search, indicates if the indexer prefilters events. |
RealTimeMaxBlockSecs | Integer | For a real-time search with RealTimeBlocking set to true, the maximum time to block. Specify 0 to indicate no limit. |
RealTimeQueueSize | Integer | For a real-time search, the queue size (in events) that the indexer should use for this search. |
Timeout | Integer | The number of seconds to keep this search after processing has stopped. |
SyncBundleReplication | String | Specifies whether this search should wait for bundle replication to complete. |
Views are similar to tables in the way that they represent data; however, views are read-only.
Queries can be executed against a view as if it were a normal table.
Name | Description |
AlertsInInternalServer | A dataset object in the example InternalServer data model. |
LookUpReport | An example lookup report representing a view based on a saved report in Splunk. |
UploadedModel | An example of a table object inside a data model. |
A dataset object in the example InternalServer data model.
This is an example of a dataset view. These views are generated from dataset objects inside a data model. The Cloud will use the Splunk APIs to process the following query components; the Cloud processes other parts of the query client-side in memory.
All columns support server-side processing for the following operators and functions:
LIMIT, ORDER BY, GROUP BY, and HAVING are also processed server-side. The exception is when the selected columns include fields that are not in the GROUP BY; in that case the GROUP BY, criteria, and limiting are handled client-side.
When an unsupported criterion or function is used, all processing is completed client-side (except selecting the specified fields). This is also the case when a SELECT statement has a column that is not in the GROUP BY clause.
For example, the Cloud uses the Splunk APIs to process the following queries.
SELECT Component, Timeendpos as Timeend FROM [AlertsInInternalServer] WHERE Component = 'Saved' OR EventType != '' AND Priority IS NOT NULL AND Linecount NOT IN ('1', '2') ORDER BY Priority DESC LIMIT 5
SELECT AVG(Suppressed), Priority FROM [AlertsInInternalServer] GROUP BY Priority HAVING AVG(Suppressed) > 0
Name | Type | Description |
_time | Datetime | |
component | String | |
date_hour | Int | |
date_mday | Int | |
date_minute | Int | |
date_month | String | |
date_second | Int | |
date_wday | String | |
date_year | Int | |
date_zone | Int | |
digest_mode | Int | |
dispatch_time | Int | |
host | String | |
linecount | Int | |
log_level | String | |
priority | String | |
punct | String | |
savedsearch_id | String | |
scheduled_time | Int | |
search_type | String | |
server_alert_actions | String | |
server_app | String | |
server_message | String | |
server_result_count | Int | |
server_run_time | Double | |
server_savedsearch_name | String | |
server_sid | String | |
server_status | String | |
server_user | String | |
source | String | |
sourcetype | String | |
splunk_server | String | |
suppressed | Int | |
thread_id | String | |
timeendpos | Int | |
timestartpos | Int | |
window_time | Int |
An example lookup report representing a view based on a saved report in Splunk.
This is an example of a report view. These views are generated from saved reports in Splunk.
The Cloud will use the Splunk APIs to process the following query components; the Cloud processes other parts of the query client-side in memory.
Runs a saved search, or report, and returns the search results of a saved search. If the search contains replacement placeholder terms, such as $replace_me$, the search processor replaces the placeholders with the strings you specify.
For example:
Will generate the following search statement:
All replacement placeholder terms will be dynamic and saved as Pseudo-Columns.
All columns support server-side processing for the following operators and functions:
LIMIT, ORDER BY, GROUP BY, and HAVING are also processed server-side. The exception is when the selected columns include fields that are not in the GROUP BY; in that case the GROUP BY, criteria, and limiting are handled client-side.
When an unsupported criterion or function is used, all processing is completed client-side (except selecting the specified fields). This is also the case when a SELECT statement has a column that is not in the GROUP BY clause.
For example, the Cloud processes the following queries server-side:
SELECT Country, Subregion as Sub FROM LookUpReport WHERE Iso2 != '123' OR continent = 'Europe' AND iso3 NOT IN ('example_1', 'example_2') ORDER BY Country DESC LIMIT 5
SELECT AVG(Iso2), Subregion FROM LookUpReport GROUP BY Subregion HAVING AVG(Iso2) > 0
Name | Type | Description |
continent | String | |
country | String | |
iso2 | String | |
iso3 | String | |
region_un | String | |
region_wb | String | |
subregion | String |
An example of a table object inside a data model.
This is an example of a view generated from a table object inside a data model. The Cloud will use the Splunk APIs to process the following query components; the Cloud processes other parts of the query client-side in memory.
All columns support server-side processing for the following operators and functions.
LIMIT, ORDER BY, GROUP BY, and HAVING are also processed server-side. The exception is when the selected columns include fields that are not in the GROUP BY; in that case the GROUP BY, criteria, and limiting are handled client-side.
When an unsupported criterion or function is used, all processing is completed client-side (except selecting the specified fields). This is also the case when a SELECT statement has a column that is not in the GROUP BY clause.
For example, the following queries are processed server side:
SELECT Component, Timeendpos as Timeend FROM [UploadedModel] WHERE Component = 'Saved' OR DEST_CITY_MARKET_ID != '' AND DEST_AIRPORT_ID NOT IN ('1', '2') ORDER BY ORIGIN_AIRPORT_ID DESC LIMIT 5
SELECT AVG(DEST_AIRPORT_ID), ORIGIN_AIRPORT_ID FROM [UploadedModel] GROUP BY ORIGIN_AIRPORT_ID HAVING AVG(DEST_AIRPORT_ID) > 0
Name | Type | Description |
_time | Datetime | |
DEST_AIRPORT_ID | Int | |
DEST_AIRPORT_SEQ_ID | Int | |
DEST_CITY_MARKET_ID | Int | |
host | String | |
linecount | Int | |
ORIGIN_AIRPORT_ID | Int | |
ORIGIN_AIRPORT_SEQ_ID | Int | |
ORIGIN_CITY_MARKET_ID | Int | |
punct | String | |
source | String | |
sourcetype | String | |
splunk_server | String | |
timestamp | String |
Stored procedures are function-like interfaces that extend the functionality of the Cloud beyond simple SELECT/INSERT/UPDATE/DELETE operations with Splunk.
Stored procedures accept a list of parameters, perform their intended function, and then return any relevant response data from Splunk, along with an indication of whether the procedure succeeded or failed.
Name | Description |
CreateHTTPEvent | The HTTP Event Collector (HEC) lets you send data and application events to a Splunk deployment over the HTTP and Secure HTTP (HTTPS) protocols. |
CreateIndex | Create a data index. |
CreateSavedSearch | Create a saved search. |
DeleteIndex | Delete a data index. |
UpdateIndex | Update a data index. |
UpdateSavedSearch | Update a saved search. |
The HTTP Event Collector (HEC) lets you send data and application events to a Splunk deployment over the HTTP and Secure HTTP (HTTPS) protocols.
Name | Type | Required | Description |
EventContent | String | True | The content of the event to send. |
ContentType | String | False | The type of the content specified on the EventContent input. Allowed values: JSON, RAWTEXT. |
ChannelGUID | String | False | The GUID of the channel used for the event. This is required when ContentType=RAWTEXT. |
Name | Type | Description |
Success | String | Returns the success status of the created event. |
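For example, a hedged sketch of sending a single JSON event through the HEC (the event body shown is illustrative):
EXECUTE CreateHTTPEvent EventContent = '{"event": "Hello from CData Cloud"}', ContentType = 'JSON'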
Create a data index.
Name | Type | Required | Description |
Name | String | True | The name of the index to create. |
BlockSignSize | String | False | Controls how many events make up a block for block signatures. If this is set to 0, block signing is disabled for this index. A recommended value is 100. |
BucketRebuildMemoryHint | String | False | Suggestion for the bucket rebuild process for the size of the time-series (tsidx) file to make. Default value, auto, varies by the amount of physical RAM on the host. |
ColdPath | String | False | An absolute path that contains the colddbs for the index. The path must be readable and writable. |
ColdToFrozenDir | String | False | Destination path for the frozen archive. Use as an alternative to a ColdToFrozenScript. |
ColdToFrozenScript | String | False | Path to the archiving script. |
DataType | String | False | Specifies the type of index. |
EnableOnlineBucketRepair | String | False | Enables asynchronous 'online fsck' bucket repair, which runs concurrently with splunk software. When enabled, you do not have to wait until buckets are repaired to start the splunk platform. |
FrozenTimePeriodInSecs | String | False | Number of seconds after which indexed data rolls to frozen. Defaults to 188697600 (6 years). |
HomePath | String | False | An absolute path that contains the hot and warm buckets for the index. |
MaxBloomBackfillBucketAge | String | False | Valid values are: integer[m|s|h|d] if a warm or cold bucket is older than the specified age, do not create or rebuild its bloomfilter. Specify 0 to never rebuild bloomfilters. |
MaxConcurrentOptimizes | String | False | The number of concurrent optimize processes that can run against a hot bucket. |
MaxDataSize | String | False | The maximum size in MB for a hot DB to reach before a roll to warm is triggered. Specifying 'auto' or 'auto_high_volume' causes Splunk software to autotune this parameter (recommended). |
MaxHotBuckets | String | False | Maximum hot buckets that can exist per index. |
MaxHotIdleSecs | String | False | Maximum life, in seconds, of a hot bucket. |
MaxHotSpanSecs | String | False | Upper bound of target maximum timespan of hot/warm buckets in seconds. |
MaxMemMB | String | False | The amount of memory, expressed in MB, to allocate for buffering a single tsidx file into memory before flushing to disk. |
MaxMetaEntries | String | False | Sets the maximum number of unique lines in . data files in a bucket, which may help to reduce memory consumption. |
MaxTimeUnreplicatedNoAcks | String | False | Upper limit, in seconds, on how long an event can sit in raw slice. Applies only if replication is enabled for this index. |
MaxTimeUnreplicatedWithAcks | String | False | Upper limit, in seconds, on how long events can sit unacknowledged in a raw slice. Applies only if you have enabled acks on forwarders and have replication enabled (with clustering). |
MaxTotalDataSizeMB | String | False | The maximum size of an index (in MB). If an index grows larger than the maximum size, the oldest data is frozen. |
MaxWarmDBCount | String | False | The maximum number of warm buckets. If this number is exceeded, the warm bucket/s with the lowest value for their latest times is moved to cold. |
MinRawFileSyncSecs | String | False | Specify an integer (or 'disable') for this parameter. This parameter sets how frequently splunkd forces a filesystem sync while compressing journal slices. |
MinStreamGroupQueueSize | String | False | Minimum size of the queue that stores events in memory before committing them to a tsidx file. |
PartialServiceMetaPeriod | String | False | Related to serviceMetaPeriod. If set, it enables metadata sync every SPECIFIED seconds, but only for records where the sync can be done efficiently in-place, without requiring a full re-write of the metadata file. |
ProcessTrackerServiceInterval | String | False | Specifies, in seconds, how often the indexer checks the status of the child OS processes it launched to see if it can launch new processes for queued requests. If set to 0, the indexer checks child process status every second. |
QuarantineFutureSecs | String | False | Events with timestamp of QuarantineFutureSecs newer than 'now' are dropped into quarantine bucket. |
QuarantinePastSecs | String | False | Events with timestamp of quarantinePastSecs older than 'now' are dropped into quarantine bucket. |
RawChunkSizeBytes | String | False | Target uncompressed size in bytes for individual raw slice in the rawdata journal of the index. 0 is not a valid value. If 0 is specified, rawChunkSizeBytes is set to the default value. |
RepFactor | String | False | Index replication control. This parameter applies to only clustering slaves. auto = Use the master index replication configuration value. 0 = Turn off replication for this index. |
RotatePeriodInSecs | String | False | How frequently (in seconds) to check if a new hot bucket needs to be created. Also, how frequently to check if there are any warm/cold buckets that should be rolled/frozen. |
ServiceMetaPeriod | String | False | Defines how frequently metadata is synced to disk, in seconds. |
SyncMeta | String | False | When true, a sync operation is called before file descriptor is closed on metadata file updates. This functionality improves integrity of metadata files, especially in regards to operating system crashes/machine failures. |
ThawedPath | String | False | An absolute path that contains the thawed (resurrected) databases for the index. Cannot be defined in terms of a volume definition. |
ThrottleCheckPeriod | String | False | Defines how frequently Splunk software checks for index throttling condition, in seconds. |
TstatsHomePath | String | False | Location to store datamodel acceleration TSIDX data for this index. If specified, it must be defined in terms of a volume definition. Path must be writable. |
WarmToColdScript | String | False | Path to a script to run when moving data from warm to cold. |
Name | Type | Description |
AssureUTF8 | Boolean | Boolean value indicating whether all data retrieved from the index is proper UTF8. |
BlockSignSize | Integer | Controls how many events make up a block for block signatures. If this is set to 0, block signing is disabled for this index. A recommended value is 100. |
BlockSignatureDatabase | String | The index that stores block signatures of events. This is a global setting, not a per index setting. |
BucketRebuildMemoryHint | String | Suggestion for the bucket rebuild process for the size of the time-series (tsidx) file to make. |
ColdPath | String | Filepath to the cold databases for the index. |
ColdPathExpanded | String | Absolute filepath to the cold databases. |
ColdToFrozenDir | String | Destination path for the frozen archive. Use as an alternative to a ColdToFrozenScript. |
ColdToFrozenScript | String | Path to the archiving script. |
CurrentDBSizeMB | Integer | Total size, in MB, of data stored in the index. The total includes data in the home, cold, and thawed paths. |
DataType | String | The type of index. |
DefaultDatabase | String | If no index destination information is available in the input data, the index shown here is the destination of such data. |
EnableOnlineBucketRepair | Boolean | Enables asynchronous 'online fsck' bucket repair, which runs concurrently with splunk software. When enabled, you do not have to wait until buckets are repaired to start the splunk platform. |
FrozenTimePeriodInSecs | Integer | Number of seconds after which indexed data rolls to frozen. |
HomePath | String | An absolute path that contains the hot and warm buckets for the index. |
HomePathExpanded | String | An absolute filepath to the hot and warm buckets for the index. |
IndexThreads | String | Number of threads used for indexing. This is a global setting, not a per index setting. |
IsInternal | Boolean | Indicates if this is an internal index. |
IsReady | Boolean | Indicates if an index is properly initialized. |
LastInitTime | Datetime | Last time the index processor was successfully initialized. This is a global setting, not a per index setting. |
MaxBloomBackfillBucketAge | String | If a bucket (warm or cold) is older than this, Splunk software does not create (or re-create) its bloom filter. |
MaxConcurrentOptimizes | Integer | The number of concurrent optimize processes that can run against a hot bucket. |
MaxDataSize | String | The maximum size in MB for a hot DB to reach before a roll to warm is triggered. Specifying 'auto' or 'auto_high_volume' causes Splunk software to autotune this parameter (recommended). |
MaxHotBuckets | String | Maximum hot buckets that can exist per index. |
MaxHotIdleSecs | Integer | Maximum life, in seconds, of a hot bucket. |
MaxHotSpanSecs | Integer | Upper bound of target maximum timespan of hot/warm buckets in seconds. |
MaxMemMB | Integer | The amount of memory, expressed in MB, to allocate for buffering a single tsidx file into memory before flushing to disk. |
MaxMetaEntries | Integer | Sets the maximum number of unique lines in . data files in a bucket, which may help to reduce memory consumption. |
MaxTime | Datetime | ISO8601 timestamp of the newest event time in the index. |
MaxTimeUnreplicatedNoAcks | Integer | Upper limit, in seconds, on how long an event can sit in raw slice. Applies only if replication is enabled for this index. |
MaxTimeUnreplicatedWithAcks | Integer | Upper limit, in seconds, on how long events can sit unacknowledged in a raw slice. Applies only if you have enabled acks on forwarders and have replication enabled (with clustering). |
MaxTotalDataSizeMB | Integer | The maximum size of an index (in MB). If an index grows larger than the maximum size, the oldest data is frozen. |
MaxWarmDBCount | String | The maximum number of warm buckets. If this number is exceeded, the warm bucket/s with the lowest value for their latest times is moved to cold. |
MemPoolMB | String | Determines how much memory is given to the indexer memory pool. This is a global setting, not a per-index setting. |
MinRawFileSyncSecs | String | Specify an integer (or 'disable') for this parameter. This parameter sets how frequently splunkd forces a filesystem sync while compressing journal slices. |
MinStreamGroupQueueSize | Integer | Minimum size of the queue that stores events in memory before committing them to a tsidx file. |
MinTime | Datetime | ISO8601 timestamp of the oldest event time in the index. |
PartialServiceMetaPeriod | Integer | Related to serviceMetaPeriod. If set, it enables metadata sync every SPECIFIED seconds, but only for records where the sync can be done efficiently in-place, without requiring a full re-write of the metadata file. |
ProcessTrackerServiceInterval | Integer | Specifies, in seconds, how often the indexer checks the status of the child OS processes it launched to see if it can launch new processes for queued requests. If set to 0, the indexer checks child process status every second. |
QuarantineFutureSecs | Integer | Events with timestamp of QuarantineFutureSecs newer than 'now' are dropped into quarantine bucket. |
QuarantinePastSecs | Integer | Events with timestamp of quarantinePastSecs older than 'now' are dropped into quarantine bucket. |
RawChunkSizeBytes | Integer | Target uncompressed size in bytes for individual raw slice in the rawdata journal of the index. 0 is not a valid value. If 0 is specified, rawChunkSizeBytes is set to the default value. |
RepFactor | String | Index replication control. This parameter applies to only clustering slaves. auto = Use the master index replication configuration value. 0 = Turn off replication for this index. |
RotatePeriodInSecs | Integer | How frequently (in seconds) to check if a new hot bucket needs to be created. Also, how frequently to check if there are any warm/cold buckets that should be rolled/frozen. |
ServiceMetaPeriod | Integer | Defines how frequently metadata is synced to disk, in seconds. |
SuppressBannerList | String | List of indexes for which we suppress 'index missing' warning banner messages. This is a global setting, not a per index setting. |
Sync | String | Specifies the number of events that trigger the indexer to sync events. This is a global setting, not a per index setting. |
SyncMeta | Boolean | When true, a sync operation is called before file descriptor is closed on metadata file updates. This functionality improves integrity of metadata files, especially in regards to operating system crashes/machine failures. |
ThawedPath | String | An absolute path that contains the thawed (resurrected) databases for the index. Cannot be defined in terms of a volume definition. |
ThawedPathExpanded | String | Absolute filepath to the thawed (resurrected) databases. |
ThrottleCheckPeriod | Integer | Defines how frequently Splunk software checks for index throttling condition, in seconds. |
TotalEventCount | Integer | Total number of events in the index. |
TsidxDedupPostingsListMaxTermsLimit | Integer | This setting is valid only when tsidxWritingLevel is at 4 or higher. This maximum term limit sets an upper bound on the number of terms kept inside an in-memory hash table that serves to improve tsidx compression. |
TstatsHomePath | String | Location to store datamodel acceleration TSIDX data for this index. If specified, it must be defined in terms of a volume definition. Path must be writable. |
WarmToColdScript | String | Path to a script to run when moving data from warm to cold. |
Success | Boolean | Boolean indicating whether the stored procedure was executed successfully. |
ErrorCode | Integer | The error code in case the procedure is not executed successfully. |
ErrorMessage | String | The error message in case the procedure is not executed successfully. |
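For example, a minimal call that creates an index and relies on server defaults for the optional parameters (the index name is a placeholder):
EXECUTE CreateIndex Name = 'my_new_index'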
Create a saved search.
Name | Type | Required | Description |
Name | String | True | A name for the search |
Search | String | True | The search query to save |
Description | String | False | Description of this saved search. |
CronSchedule | String | False | The cron schedule to execute this search. For example: */5 * * * * causes the search to execute every 5 minutes. |
Disabled | Boolean | False | Indicates if this saved search is disabled. |
IsScheduled | Boolean | False | Indicates if this search is to be run on a schedule. |
IsVisible | Boolean | False | Indicates if this saved search appears in the visible saved search list. |
RealTimeSchedule | Boolean | False | If this value is set to 1, the scheduler bases its determination of the next scheduled search execution time on the current time. If this value is set to 0, it is determined based on the last search execution time. |
RunOnStartup | Boolean | False | Indicates whether this search runs on startup. If it does not run on startup, it runs at the next scheduled time. |
SchedulePriority | String | False | Indicates the scheduling priority of a specific search. The allowed values are: default, higher, highest. |
UserContext | String | False | If user context is provided, servicesNS node will be used (/servicesNS/[UserContext]/search), otherwise it defaults to the general endpoint /services. |
Name | Type | Description |
Success | Boolean | Returns the success status of the created saved search. |
Message | String | Warnings from the server. |
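For example, a sketch of creating a scheduled saved search (the name, query, and schedule are placeholders):
EXECUTE CreateSavedSearch Name = 'ErrorsLastHour', Search = 'search index=_internal log_level=ERROR earliest=-1h', CronSchedule = '0 * * * *', IsScheduled = 'true'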
Delete a data index.
Name | Type | Required | Description |
Name | String | True | The name of the index to delete. |
Name | Type | Description |
Success | Boolean | Boolean indicating whether the stored procedure was executed successfully. |
ErrorCode | Integer | The error code in case the procedure is not executed successfully. |
ErrorMessage | String | The error message in case the procedure is not executed successfully. |
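For example (the index name is a placeholder):
EXECUTE DeleteIndex Name = 'my_new_index'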
Update a data index.
Name | Type | Required | Description |
Name | String | True | The name of the index to update. |
BlockSignSize | String | False | Controls how many events make up a block for block signatures. If this is set to 0, block signing is disabled for this index. A recommended value is 100. |
BucketRebuildMemoryHint | String | False | Suggestion for the bucket rebuild process for the size of the time-series (tsidx) file to make. Default value, auto, varies by the amount of physical RAM on the host. |
ColdToFrozenDir | String | False | Destination path for the frozen archive. Use as an alternative to a ColdToFrozenScript. |
ColdToFrozenScript | String | False | Path to the archiving script. |
EnableOnlineBucketRepair | String | False | Enables asynchronous 'online fsck' bucket repair, which runs concurrently with splunk software. When enabled, you do not have to wait until buckets are repaired to start the splunk platform. |
FrozenTimePeriodInSecs | String | False | Number of seconds after which indexed data rolls to frozen. Defaults to 188697600 (6 years). |
MaxBloomBackfillBucketAge | String | False | Valid values are: integer[m|s|h|d] if a warm or cold bucket is older than the specified age, do not create or rebuild its bloomfilter. Specify 0 to never rebuild bloomfilters. |
MaxConcurrentOptimizes | String | False | The number of concurrent optimize processes that can run against a hot bucket. |
MaxDataSize | String | False | The maximum size in MB for a hot DB to reach before a roll to warm is triggered. Specifying 'auto' or 'auto_high_volume' causes Splunk software to autotune this parameter (recommended). |
MaxHotBuckets | String | False | Maximum hot buckets that can exist per index. |
MaxHotIdleSecs | String | False | Maximum life, in seconds, of a hot bucket. |
MaxHotSpanSecs | String | False | Upper bound of target maximum timespan of hot/warm buckets in seconds. |
MaxMemMB | String | False | The amount of memory, expressed in MB, to allocate for buffering a single tsidx file into memory before flushing to disk. |
MaxMetaEntries | String | False | Sets the maximum number of unique lines in . data files in a bucket, which may help to reduce memory consumption. |
MaxTimeUnreplicatedNoAcks | String | False | Upper limit, in seconds, on how long an event can sit in raw slice. Applies only if replication is enabled for this index. |
MaxTimeUnreplicatedWithAcks | String | False | Upper limit, in seconds, on how long events can sit unacknowledged in a raw slice. Applies only if you have enabled acks on forwarders and have replication enabled (with clustering). |
MaxTotalDataSizeMB | String | False | The maximum size of an index (in MB). If an index grows larger than the maximum size, the oldest data is frozen. |
MaxWarmDBCount | String | False | The maximum number of warm buckets. If this number is exceeded, the warm bucket/s with the lowest value for their latest times is moved to cold. |
MinRawFileSyncSecs | String | False | Specify an integer (or 'disable') for this parameter. This parameter sets how frequently splunkd forces a filesystem sync while compressing journal slices. |
MinStreamGroupQueueSize | String | False | Minimum size of the queue that stores events in memory before committing them to a tsidx file. |
PartialServiceMetaPeriod | String | False | Related to serviceMetaPeriod. If set, it enables metadata sync every n seconds, but only for records where the sync can be done efficiently in-place, without requiring a full re-write of the metadata file. |
ProcessTrackerServiceInterval | String | False | Specifies, in seconds, how often the indexer checks the status of the child OS processes it launched to see if it can launch new processes for queued requests. If set to 0, the indexer checks child process status every second. |
QuarantineFutureSecs | String | False | Events with timestamp of QuarantineFutureSecs newer than 'now' are dropped into quarantine bucket. |
QuarantinePastSecs | String | False | Events with timestamp of quarantinePastSecs older than 'now' are dropped into quarantine bucket. |
RawChunkSizeBytes | String | False | Target uncompressed size in bytes for individual raw slice in the rawdata journal of the index. 0 is not a valid value. If 0 is specified, rawChunkSizeBytes is set to the default value. |
RepFactor | String | False | Index replication control. This parameter applies to only clustering slaves. auto = Use the master index replication configuration value. 0 = Turn off replication for this index. |
RotatePeriodInSecs | String | False | How frequently (in seconds) to check if a new hot bucket needs to be created. Also, how frequently to check if there are any warm/cold buckets that should be rolled/frozen. |
ServiceMetaPeriod | String | False | Defines how frequently metadata is synced to disk, in seconds. |
SyncMeta | String | False | When true, a sync operation is called before file descriptor is closed on metadata file updates. This functionality improves integrity of metadata files, especially in regards to operating system crashes/machine failures. |
ThrottleCheckPeriod | String | False | Defines how frequently Splunk software checks for index throttling condition, in seconds. |
TstatsHomePath | String | False | Location to store datamodel acceleration TSIDX data for this index. If specified, it must be defined in terms of a volume definition. Path must be writable. |
WarmToColdScript | String | False | Path to a script to run when moving data from warm to cold. |
Name | Type | Description |
AssureUTF8 | Boolean | Boolean value indicating whether all data retrieved from the index is proper UTF8. |
BlockSignSize | Integer | Controls how many events make up a block for block signatures. If this is set to 0, block signing is disabled for this index. A recommended value is 100. |
BlockSignatureDatabase | String | The index that stores block signatures of events. This is a global setting, not a per index setting. |
BucketRebuildMemoryHint | String | Suggestion for the bucket rebuild process for the size of the time-series (tsidx) file to make. |
ColdPath | String | Filepath to the cold databases for the index. |
ColdPathExpanded | String | Absolute filepath to the cold databases. |
ColdToFrozenDir | String | Destination path for the frozen archive. Use as an alternative to a ColdToFrozenScript. |
ColdToFrozenScript | String | Path to the archiving script. |
CurrentDBSizeMB | Integer | Total size, in MB, of data stored in the index. The total includes data in the home, cold, and thawed paths. |
DefaultDatabase | String | If no index destination information is available in the input data, the index shown here is the destination of such data. |
EnableOnlineBucketRepair | Boolean | Enables asynchronous 'online fsck' bucket repair, which runs concurrently with splunk software. When enabled, you do not have to wait until buckets are repaired to start the splunk platform. |
EnableRealtimeSearch | Boolean | Indicates if this is a real-time search. This is a global setting, not a per index setting. |
FrozenTimePeriodInSecs | Integer | Number of seconds after which indexed data rolls to frozen. |
HomePath | String | An absolute path that contains the hot and warm buckets for the index. |
HomePathExpanded | String | An absolute filepath to the hot and warm buckets for the index. |
IndexThreads | String | Number of threads used for indexing. This is a global setting, not a per index setting. |
IsInternal | Boolean | Indicates if this is an internal index. |
LastInitTime | Datetime | Last time the index processor was successfully initialized. This is a global setting, not a per index setting. |
MaxBloomBackfillBucketAge | String | If a bucket (warm or cold) is older than this, Splunk software does not create (or re-create) its bloom filter. |
MaxConcurrentOptimizes | Integer | The number of concurrent optimize processes that can run against a hot bucket. |
MaxDataSize | String | The maximum size in MB for a hot DB to reach before a roll to warm is triggered. Specifying 'auto' or 'auto_high_volume' causes Splunk software to autotune this parameter (recommended). |
MaxHotBuckets | String | Maximum hot buckets that can exist per index. |
MaxHotIdleSecs | Integer | Maximum life, in seconds, of a hot bucket. |
MaxHotSpanSecs | Integer | Upper bound of target maximum timespan of hot/warm buckets in seconds. |
MaxMemMB | Integer | The amount of memory, expressed in MB, to allocate for buffering a single tsidx file into memory before flushing to disk. |
MaxMetaEntries | Integer | Sets the maximum number of unique lines in . data files in a bucket, which may help to reduce memory consumption. |
MaxTime | Datetime | ISO8601 timestamp of the newest event time in the index. |
MaxTimeUnreplicatedNoAcks | Integer | Upper limit, in seconds, on how long an event can sit in raw slice. Applies only if replication is enabled for this index. |
MaxTimeUnreplicatedWithAcks | Integer | Upper limit, in seconds, on how long events can sit unacknowledged in a raw slice. Applies only if you have enabled acks on forwarders and have replication enabled (with clustering). |
MaxTotalDataSizeMB | Integer | The maximum size of an index (in MB). If an index grows larger than the maximum size, the oldest data is frozen. |
MaxWarmDBCount | String | The maximum number of warm buckets. If this number is exceeded, the warm bucket/s with the lowest value for their latest times is moved to cold. |
MemPoolMB | String | Determines how much memory is given to the indexer memory pool. This is a global setting, not a per-index setting. |
MinRawFileSyncSecs | String | Specify an integer (or 'disable') for this parameter. This parameter sets how frequently splunkd forces a filesystem sync while compressing journal slices. |
MinStreamGroupQueueSize | Integer | Minimum size of the queue that stores events in memory before committing them to a tsidx file. |
MinTime | Datetime | ISO8601 timestamp of the oldest event time in the index. |
PartialServiceMetaPeriod | Integer | Related to serviceMetaPeriod. If set, it enables metadata sync every SPECIFIED seconds, but only for records where the sync can be done efficiently in-place, without requiring a full re-write of the metadata file. |
ProcessTrackerServiceInterval | Integer | Specifies, in seconds, how often the indexer checks the status of the child OS processes it launched to see if it can launch new processes for queued requests. If set to 0, the indexer checks child process status every second. |
QuarantineFutureSecs | Integer | Events with timestamp of QuarantineFutureSecs newer than 'now' are dropped into quarantine bucket. |
QuarantinePastSecs | Integer | Events with timestamp of quarantinePastSecs older than 'now' are dropped into quarantine bucket. |
RawChunkSizeBytes | Integer | Target uncompressed size in bytes for individual raw slice in the rawdata journal of the index. 0 is not a valid value. If 0 is specified, rawChunkSizeBytes is set to the default value. |
RepFactor | String | Index replication control. This parameter applies to only clustering slaves. auto = Use the master index replication configuration value. 0 = Turn off replication for this index. |
RotatePeriodInSecs | Integer | How frequently (in seconds) to check if a new hot bucket needs to be created. Also, how frequently to check if there are any warm/cold buckets that should be rolled/frozen. |
ServiceMetaPeriod | Integer | Defines how frequently metadata is synced to disk, in seconds. |
SuppressBannerList | String | List of indexes for which we suppress 'index missing' warning banner messages. This is a global setting, not a per index setting. |
Sync | String | Specifies the number of events that trigger the indexer to sync events. This is a global setting, not a per index setting. |
SyncMeta | Boolean | When true, a sync operation is called before file descriptor is closed on metadata file updates. This functionality improves integrity of metadata files, especially in regards to operating system crashes/machine failures. |
ThawedPath | String | An absolute path that contains the thawed (resurrected) databases for the index. Cannot be defined in terms of a volume definition. |
ThawedPathExpanded | String | Absolute filepath to the thawed (resurrected) databases. |
ThrottleCheckPeriod | Integer | Defines how frequently Splunk software checks for index throttling condition, in seconds. |
TotalEventCount | Integer | Total number of events in the index. |
TsidxDedupPostingsListMaxTermsLimit | Integer | This setting is valid only when tsidxWritingLevel is at 4 or higher. This maximum term limit sets an upper bound on the number of terms kept inside an in-memory hash table that serves to improve tsidx compression. |
TstatsHomePath | String | Location to store datamodel acceleration TSIDX data for this index. If specified, it must be defined in terms of a volume definition. Path must be writable. |
WarmToColdScript | String | Path to a script to run when moving data from warm to cold. |
Success | Boolean | Boolean indicating whether the stored procedure was executed successfully. |
ErrorCode | Integer | The error code in case the procedure is not executed successfully. |
ErrorMessage | String | The error message in case the procedure is not executed successfully. |
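For example, a sketch of adjusting retention-related settings on an existing index (the values shown are placeholders):
EXECUTE UpdateIndex Name = 'my_new_index', MaxTotalDataSizeMB = '250000', FrozenTimePeriodInSecs = '94348800'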
Update a saved search.
Name | Type | Required | Description |
Name | String | True | A name for the search |
Search | String | False | The search query to save |
Description | String | False | Description of this saved search. |
CronSchedule | String | False | The cron schedule to execute this search. For example: */5 * * * * causes the search to execute every 5 minutes. |
Disabled | Boolean | False | Indicates if this saved search is disabled. |
IsScheduled | Boolean | False | Indicates if this search is to be run on a schedule. |
IsVisible | Boolean | False | Indicates if this saved search appears in the visible saved search list. |
RealTimeSchedule | Boolean | False | If this value is set to 1, the scheduler bases its determination of the next scheduled search execution time on the current time. If this value is set to 0, it is determined based on the last search execution time. |
RunOnStartup | Boolean | False | Indicates whether this search runs on startup. If it does not run on startup, it runs at the next scheduled time. |
SchedulePriority | String | False | Indicates the scheduling priority of a specific search. 使用できる値は次のとおりです。default, higher, highest |
Name | Type | Description |
Success | Boolean | Returns the success status of the updated saved search. |
Message | String | Warnings from the server. |
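以下は、このストアドプロシージャの呼び出し例です。プロシージャ名(ここではUpdateSavedSearch と仮定)およびパラメータ値は説明用のサンプルであり、実際に利用可能なプロシージャ名はsys_procedures テーブルで確認できます。
EXECUTE UpdateSavedSearch Name = 'MySavedSearch', CronSchedule = '*/5 * * * *', IsScheduled = 'true'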
このセクションで説明されているシステムテーブルをクエリして、スキーマ情報、データソース機能に関する情報、およびバッチ操作の統計にアクセスできます。
以下のテーブルは、Splunk のデータベースメタデータを返します。
以下のテーブルは、データソースへの接続方法およびクエリ方法についての情報を返します。
次のテーブルは、データ変更クエリのクエリ統計を返します。
利用可能なデータベースをリストします。
次のクエリは、接続文字列で決定されるすべてのデータベースを取得します。
SELECT * FROM sys_catalogs
Name | Type | Description |
CatalogName | String | データベース名。 |
利用可能なスキーマをリストします。
次のクエリは、すべての利用可能なスキーマを取得します。
SELECT * FROM sys_schemas
Name | Type | Description |
CatalogName | String | データベース名。 |
SchemaName | String | スキーマ名。 |
利用可能なテーブルをリストします。
次のクエリは、利用可能なテーブルおよびビューを取得します。
SELECT * FROM sys_tables
Name | Type | Description |
CatalogName | String | テーブルまたはビューを含むデータベース。 |
SchemaName | String | テーブルまたはビューを含むスキーマ。 |
TableName | String | テーブル名またはビュー名。 |
TableType | String | テーブルの種類(テーブルまたはビュー)。 |
Description | String | テーブルまたはビューの説明。 |
IsUpdateable | Boolean | テーブルが更新可能かどうか。 |
利用可能なテーブルおよびビューのカラムについて説明します。
次のクエリは、DataModels テーブルのカラムとデータ型を返します。
SELECT ColumnName, DataTypeName FROM sys_tablecolumns WHERE TableName='DataModels'
Name | Type | Description |
CatalogName | String | テーブルまたはビューを含むデータベースの名前。 |
SchemaName | String | テーブルまたはビューを含むスキーマ。 |
TableName | String | カラムを含むテーブルまたはビューの名前。 |
ColumnName | String | カラム名。 |
DataTypeName | String | データ型の名前。 |
DataType | Int32 | データ型を示す整数値。この値は、実行時に環境に基づいて決定されます。 |
Length | Int32 | カラムのストレージサイズ。 |
DisplaySize | Int32 | 指定されたカラムの通常の最大幅(文字数)。 |
NumericPrecision | Int32 | 数値データの最大桁数。文字データおよび日時データの場合は、カラムの長さ(文字数)。 |
NumericScale | Int32 | カラムのスケール(小数点以下の桁数)。 |
IsNullable | Boolean | カラムがNull を含められるかどうか。 |
Description | String | カラムの簡単な説明。 |
Ordinal | Int32 | カラムのシーケンスナンバー。 |
IsAutoIncrement | String | カラムに固定増分値が割り当てられるかどうか。 |
IsGeneratedColumn | String | 生成されたカラムであるかどうか。 |
IsHidden | Boolean | カラムが非表示かどうか。 |
IsArray | Boolean | カラムが配列かどうか。 |
IsReadOnly | Boolean | カラムが読み取り専用かどうか。 |
IsKey | Boolean | sys_tablecolumns から返されたフィールドがテーブルの主キーであるかどうか。 |
利用可能なストアドプロシージャをリストします。
次のクエリは、利用可能なストアドプロシージャを取得します。
SELECT * FROM sys_procedures
Name | Type | Description |
CatalogName | String | ストアドプロシージャを含むデータベース。 |
SchemaName | String | ストアドプロシージャを含むスキーマ。 |
ProcedureName | String | ストアドプロシージャの名前。 |
Description | String | ストアドプロシージャの説明。 |
ProcedureType | String | PROCEDURE やFUNCTION などのプロシージャのタイプ。 |
ストアドプロシージャパラメータについて説明します。
次のクエリは、SelectEntries ストアドプロシージャのすべての入力パラメータについての情報を返します。
SELECT * FROM sys_procedureparameters WHERE ProcedureName='SelectEntries' AND (Direction=1 OR Direction=2)
Name | Type | Description |
CatalogName | String | ストアドプロシージャを含むデータベースの名前。 |
SchemaName | String | ストアドプロシージャを含むスキーマの名前。 |
ProcedureName | String | パラメータを含むストアドプロシージャの名前。 |
ColumnName | String | ストアドプロシージャパラメータの名前。 |
Direction | Int32 | パラメータのタイプに対応する整数値:input (1)、input/output (2)、またはoutput (4)。input/output タイプパラメータは、入力パラメータと出力パラメータの両方になれます。 |
DataTypeName | String | データ型の名前。 |
DataType | Int32 | データ型を示す整数値。この値は、実行時に環境に基づいて決定されます。 |
Length | Int32 | 文字データの場合は、許可される文字数。数値データの場合は、許可される桁数。 |
NumericPrecision | Int32 | 数値データの場合は最大精度。文字データおよび日時データの場合は、カラムの長さ(文字数)。 |
NumericScale | Int32 | 数値データの小数点以下の桁数。 |
IsNullable | Boolean | パラメータがNull を含められるかどうか。 |
IsRequired | Boolean | プロシージャの実行にパラメータが必要かどうか。 |
IsArray | Boolean | パラメータが配列かどうか。 |
Description | String | パラメータの説明。 |
Ordinal | Int32 | パラメータのインデックス。 |
主キーおよび外部キーについて説明します。
次のクエリは、DataModels テーブルの主キーを取得します。
SELECT * FROM sys_keycolumns WHERE IsKey='True' AND TableName='DataModels'
Name | Type | Description |
CatalogName | String | キーを含むデータベースの名前。 |
SchemaName | String | キーを含むスキーマの名前。 |
TableName | String | キーを含むテーブルの名前。 |
ColumnName | String | キーカラムの名前。 |
IsKey | Boolean | カラムがTableName フィールドで参照されるテーブル内の主キーかどうか。 |
IsForeignKey | Boolean | カラムがTableName フィールドで参照される外部キーかどうか。 |
PrimaryKeyName | String | 主キーの名前。 |
ForeignKeyName | String | 外部キーの名前。 |
ReferencedCatalogName | String | 主キーを含むデータベース。 |
ReferencedSchemaName | String | 主キーを含むスキーマ。 |
ReferencedTableName | String | 主キーを含むテーブル。 |
ReferencedColumnName | String | 主キーのカラム名。 |
外部キーについて説明します。
次のクエリは、他のテーブルを参照するすべての外部キーを取得します。
SELECT * FROM sys_foreignkeys WHERE ForeignKeyType = 'FOREIGNKEY_TYPE_IMPORT'
Name | Type | Description |
CatalogName | String | キーを含むデータベースの名前。 |
SchemaName | String | キーを含むスキーマの名前。 |
TableName | String | キーを含むテーブルの名前。 |
ColumnName | String | キーカラムの名前。 |
PrimaryKeyName | String | 主キーの名前。 |
ForeignKeyName | String | 外部キーの名前。 |
ReferencedCatalogName | String | 主キーを含むデータベース。 |
ReferencedSchemaName | String | 主キーを含むスキーマ。 |
ReferencedTableName | String | 主キーを含むテーブル。 |
ReferencedColumnName | String | 主キーのカラム名。 |
ForeignKeyType | String | 外部キーがインポート(他のテーブルを指す)キーかエクスポート(他のテーブルから参照される)キーかを指定します。 |
主キーについて説明します。
次のクエリは、すべてのテーブルとビューから主キーを取得します。
SELECT * FROM sys_primarykeys
Name | Type | Description |
CatalogName | String | キーを含むデータベースの名前。 |
SchemaName | String | キーを含むスキーマの名前。 |
TableName | String | キーを含むテーブルの名前。 |
ColumnName | String | キーカラムの名前。 |
KeySeq | String | 主キーのシーケンス番号。 |
KeyName | String | 主キーの名前。 |
利用可能なインデックスについて説明します。インデックスをフィルタリングすることで、より高速なクエリ応答時間でセレクティブクエリを記述できます。
次のクエリは、主キーでないすべてのインデックスを取得します。
SELECT * FROM sys_indexes WHERE IsPrimary='false'
Name | Type | Description |
CatalogName | String | インデックスを含むデータベースの名前。 |
SchemaName | String | インデックスを含むスキーマの名前。 |
TableName | String | インデックスを含むテーブルの名前。 |
IndexName | String | インデックス名。 |
ColumnName | String | インデックスに関連付けられたカラムの名前。 |
IsUnique | Boolean | インデックスが固有の場合はTrue。そうでない場合はFalse。 |
IsPrimary | Boolean | インデックスが主キーの場合はTrue。そうでない場合はFalse。 |
Type | Int16 | インデックスタイプに対応する整数値:statistic (0)、clustered (1)、hashed (2)、またはother (3)。 |
SortOrder | String | 並べ替え順序:A が昇順、D が降順。 |
OrdinalPosition | Int16 | インデックスのカラムのシーケンスナンバー。 |
利用可能な接続プロパティと、接続文字列に設定されている接続プロパティに関する情報を返します。
このテーブルをクエリする際は、config 接続文字列を使用する必要があります。
jdbc:cdata:splunk:config:
この接続文字列を使用すると、有効な接続がなくてもこのテーブルをクエリできます。
次のクエリは、接続文字列に設定されている、あるいはデフォルト値で設定されているすべての接続プロパティを取得します。
SELECT * FROM sys_connection_props WHERE Value <> ''
Name | Type | Description |
Name | String | 接続プロパティ名。 |
ShortDescription | String | 簡単な説明。 |
Type | String | 接続プロパティのデータ型。 |
Default | String | 明示的に設定されていない場合のデフォルト値。 |
Values | String | 可能な値のカンマ区切りリスト。他の値が指定されると、検証エラーがスローされます。 |
Value | String | 設定した値またはあらかじめ設定されたデフォルト。 |
Required | Boolean | プロパティが接続に必要かどうか。 |
Category | String | 接続プロパティのカテゴリ。 |
IsSessionProperty | String | プロパティが、現在の接続に関する情報を保存するために使用されるセッションプロパティかどうか。 |
Sensitivity | String | プロパティの機密度。これは、プロパティがロギングおよび認証フォームで難読化されているかどうかを通知します。 |
PropertyName | String | キャメルケースの短縮形の接続プロパティ名。 |
Ordinal | Int32 | パラメータのインデックス。 |
CatOrdinal | Int32 | パラメータカテゴリのインデックス。 |
Hierarchy | String | このプロパティと一緒に設定する必要がある、関連のある依存プロパティを表示します。 |
Visible | Boolean | プロパティが接続UI に表示されるかどうかを通知します。 |
ETC | String | プロパティに関するその他のさまざまな情報。 |
Cloud がデータソースにオフロードできるSELECT クエリ処理について説明します。
SQL 構文の詳細については、SQL 準拠 を参照してください。
以下はSQL 機能のサンプルデータセットです。SELECT 機能の一部の側面は、サポートされている場合にカンマ区切りのリストで返されます。サポートされていない場合、カラムにはNO が入ります。
名前 | 説明 | 有効な値 |
AGGREGATE_FUNCTIONS | サポートされている集計関数。 | AVG, COUNT, MAX, MIN, SUM, DISTINCT |
COUNT | COUNT 関数がサポートされているかどうか。 | YES, NO |
IDENTIFIER_QUOTE_OPEN_CHAR | 識別子をエスケープするための開始文字。 | [ |
IDENTIFIER_QUOTE_CLOSE_CHAR | 識別子をエスケープするための終了文字。 | ] |
SUPPORTED_OPERATORS | サポートされているSQL 演算子。 | =, >, <, >=, <=, <>, !=, LIKE, NOT LIKE, IN, NOT IN, IS NULL, IS NOT NULL, AND, OR |
GROUP_BY | GROUP BY がサポートされているかどうか。サポートされている場合、どのレベルでサポートされているか。 | NO, NO_RELATION, EQUALS_SELECT, SQL_GB_COLLATE |
OJ_CAPABILITIES | サポートされている外部結合の種類。 | NO, LEFT, RIGHT, FULL, INNER, NOT_ORDERED, ALL_COMPARISON_OPS |
OUTER_JOINS | 外部結合がサポートされているかどうか。 | YES, NO |
SUBQUERIES | サブクエリがサポートされているかどうか。サポートされていれば、どのレベルでサポートされているか。 | NO, COMPARISON, EXISTS, IN, CORRELATED_SUBQUERIES, QUANTIFIED |
STRING_FUNCTIONS | サポートされている文字列関数。 | LENGTH, CHAR, LOCATE, REPLACE, SUBSTRING, RTRIM, LTRIM, RIGHT, LEFT, UCASE, SPACE, SOUNDEX, LCASE, CONCAT, ASCII, REPEAT, OCTET, BIT, POSITION, INSERT, TRIM, UPPER, REGEXP, LOWER, DIFFERENCE, CHARACTER, SUBSTR, STR, REVERSE, PLAN, UUIDTOSTR, TRANSLATE, TRAILING, TO, STUFF, STRTOUUID, STRING, SPLIT, SORTKEY, SIMILAR, REPLICATE, PATINDEX, LPAD, LEN, LEADING, KEY, INSTR, INSERTSTR, HTML, GRAPHICAL, CONVERT, COLLATION, CHARINDEX, BYTE |
NUMERIC_FUNCTIONS | サポートされている数値関数。 | ABS, ACOS, ASIN, ATAN, ATAN2, CEILING, COS, COT, EXP, FLOOR, LOG, MOD, SIGN, SIN, SQRT, TAN, PI, RAND, DEGREES, LOG10, POWER, RADIANS, ROUND, TRUNCATE |
TIMEDATE_FUNCTIONS | サポートされている日付および時刻関数。 | NOW, CURDATE, DAYOFMONTH, DAYOFWEEK, DAYOFYEAR, MONTH, QUARTER, WEEK, YEAR, CURTIME, HOUR, MINUTE, SECOND, TIMESTAMPADD, TIMESTAMPDIFF, DAYNAME, MONTHNAME, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, EXTRACT |
REPLICATION_SKIP_TABLES | レプリケーション中にスキップされたテーブルを示します。 | |
REPLICATION_TIMECHECK_COLUMNS | レプリケーション中に更新判断のカラムとして使用するかどうかを、(指定された順に)チェックするカラムのリストを含む文字列の配列。 | |
IDENTIFIER_PATTERN | 識別子としてどの文字列が有効かを示す文字列値。 | |
SUPPORT_TRANSACTION | プロバイダーが、コミットやロールバックなどのトランザクションをサポートしているかどうかを示します。 | YES, NO |
DIALECT | 使用するSQL ダイアレクトを示します。 | |
KEY_PROPERTIES | Uniform データベースを特定するプロパティを示します。 | |
SUPPORTS_MULTIPLE_SCHEMAS | プロバイダー用に複数のスキーマが存在するかどうかを示します。 | YES, NO |
SUPPORTS_MULTIPLE_CATALOGS | プロバイダー用に複数のカタログが存在するかどうかを示します。 | YES, NO |
DATASYNCVERSION | このドライバーにアクセスするために必要な、CData Sync のバージョン。 | Standard, Starter, Professional, Enterprise |
DATASYNCCATEGORY | このドライバーのCData Sync カテゴリ。 | Source, Destination, Cloud Destination |
SUPPORTSENHANCEDSQL | API で提供されている以上の、追加SQL 機能がサポートされているかどうか。 | TRUE, FALSE |
SUPPORTS_BATCH_OPERATIONS | バッチ操作がサポートされているかどうか。 | YES, NO |
SQL_CAP | このドライバーでサポートされているすべてのSQL 機能。 | SELECT, INSERT, DELETE, UPDATE, TRANSACTIONS, ORDERBY, OAUTH, ASSIGNEDID, LIMIT, LIKE, BULKINSERT, COUNT, BULKDELETE, BULKUPDATE, GROUPBY, HAVING, AGGS, OFFSET, REPLICATE, COUNTDISTINCT, JOINS, DROP, CREATE, DISTINCT, INNERJOINS, SUBQUERIES, ALTER, MULTIPLESCHEMAS, GROUPBYNORELATION, OUTERJOINS, UNIONALL, UNION, UPSERT, GETDELETED, CROSSJOINS, GROUPBYCOLLATE, MULTIPLECATS, FULLOUTERJOIN, MERGE, JSONEXTRACT, BULKUPSERT, SUM, SUBQUERIESFULL, MIN, MAX, JOINSFULL, XMLEXTRACT, AVG, MULTISTATEMENTS, FOREIGNKEYS, CASE, LEFTJOINS, COMMAJOINS, WITH, LITERALS, RENAME, NESTEDTABLES, EXECUTE, BATCH, BASIC, INDEX |
PREFERRED_CACHE_OPTIONS | 使用したいcacheOptions を指定する文字列値。 | |
ENABLE_EF_ADVANCED_QUERY | ドライバーがEntity Framework の高度なクエリをサポートしているかどうかを示します。サポートしていなければ、クエリはクライアントサイドで処理されます。 | YES, NO |
PSEUDO_COLUMNS | 利用可能な疑似カラムを示す文字列の配列。 | |
MERGE_ALWAYS | 値がtrue であれば、CData Sync 内でMerge Model が強制的に実行されます。 | TRUE, FALSE |
REPLICATION_MIN_DATE_QUERY | レプリケート開始日時を返すSELECT クエリ。 | |
REPLICATION_MIN_FUNCTION | サーバーサイドでmin を実行するために使用する式名を、プロバイダーが指定できるようになります。 | |
REPLICATION_START_DATE | レプリケート開始日を、プロバイダーが指定できるようになります。 | |
REPLICATION_MAX_DATE_QUERY | レプリケート終了日時を返すSELECT クエリ。 | |
REPLICATION_MAX_FUNCTION | サーバーサイドでmax を実行するために使用する式名を、プロバイダーが指定できるようになります。 | |
IGNORE_INTERVALS_ON_INITIAL_REPLICATE | 初回のレプリケートで、レプリケートをチャンクに分割しないテーブルのリスト。 | |
CHECKCACHE_USE_PARENTID | CheckCache 構文を親キーカラムに対して実行するかどうかを示します。 | TRUE, FALSE |
CREATE_SCHEMA_PROCEDURES | スキーマファイルの生成に使用できる、ストアドプロシージャを示します。 |
次のクエリは、WHERE 句で使用できる演算子を取得します。
SELECT * FROM sys_sqlinfo WHERE Name = 'SUPPORTED_OPERATORS'
WHERE 句では、個々のテーブルの制限や要件が異なる場合がありますので注意してください。詳しくは、データモデル セクションを参照してください。
Name | Type | Description |
NAME | String | SQL 構文のコンポーネント、またはサーバー上で処理できる機能。 |
VALUE | String | サポートされるSQL またはSQL 構文の詳細。 |
試行された変更に関する情報を返します。
次のクエリは、バッチ処理で変更された行のId を取得します。
SELECT * FROM sys_identity
Name | Type | Description |
Id | String | データ変更処理から返された、データベース生成Id。 |
Batch | String | バッチの識別子。1 は単一処理。 |
Operation | String | バッチ内の処理の結果:INSERTED、UPDATED、またはDELETED。 |
Message | String | SUCCESS、またはバッチ内の更新が失敗した場合のエラーメッセージ。 |
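例えば、データ変更クエリを実行した直後にこのテーブルをクエリすると、その処理の結果を確認できます。以下は説明用のサンプルで、値は実際のデータに合わせて置き換えてください。
INSERT INTO DataModels (Id) VALUES ('examplemodel')
SELECT Id, Batch, Operation, Message FROM sys_identity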
プロパティ | 説明 |
AuthScheme | Whether to use Basic Authentication, AccessToken, or HTTPEventCollectorToken Authentication when connecting to Splunk. |
AccessToken | The Access Token used for accessing your Splunk account. |
HTTPEventCollectorToken | The HTTP Event Collector Token is used for accessing the HTTP Event Collector feature on your Splunk account. |
URL | The URL to your Splunk endpoint. |
User | 認証で使用されるSplunk ユーザーアカウント。 |
Password | ユーザーの認証で使用されるパスワード。 |
プロパティ | 説明 |
SSLServerCert | TLS/SSL を使用して接続するときに、サーバーが受け入れ可能な証明書。 |
プロパティ | 説明 |
Verbosity | ログファイルの記述をどの程度の詳細さで記載するかを決定するverbosity レベル。 |
プロパティ | 説明 |
BrowsableSchemas | このプロパティは、使用可能なスキーマのサブセットにレポートされるスキーマを制限します。例えば、BrowsableSchemas=SchemaA,SchemaB,SchemaC です。 |
プロパティ | 説明 |
IncludeInternalFields | Whether or not the CData Cloud should push the internal fields. These fields include: user, eventtype, etc. |
MaxRows | クエリで集計またはGROUP BY を使用しない場合に返される行数を制限します。これはLIMIT 句よりも優先されます。 |
MaxThreads | Specifies the number of concurrent requests. Only used when UseJobs is true. |
Pagesize | Splunk から返されるページあたりの結果の最大数。 |
PseudoColumns | このプロパティは、テーブルのカラムとして疑似カラムが含まれているかどうかを示します。 |
RowScanDepth | Set this property to control the number of rows scanned when TypeDetectionScheme is set to RowScan. |
Timeout | タイムアウトエラーがスローされ、処理をキャンセルするまでの秒数。 |
TypeDetectionScheme | Specifies how the data types of columns are determined. |
UseJobs | Specifies whether to use the jobs endpoint instead of the export endpoint. If set to true, the maximum number of returned rows is configured in Splunk's limits.conf file. |
このセクションでは、本プロバイダーの接続文字列で設定可能なAuthentication プロパティの全リストを提供します。
プロパティ | 説明 |
AuthScheme | Whether to use Basic Authentication, AccessToken, or HTTPEventCollectorToken Authentication when connecting to Splunk. |
AccessToken | The Access Token used for accessing your Splunk account. |
HTTPEventCollectorToken | The HTTP Event Collector Token is used for accessing the HTTP Event Collector feature on your Splunk account. |
URL | The URL to your Splunk endpoint. |
User | 認証で使用されるSplunk ユーザーアカウント。 |
Password | ユーザーの認証で使用されるパスワード。 |
Whether to use Basic Authentication, AccessToken, or HTTPEventCollectorToken Authentication when connecting to Splunk.
string
"Basic"
The Access Token used for accessing your Splunk account.
string
""
The Access Token used for accessing your Splunk account.
The HTTP Event Collector Token is used for accessing the HTTP Event Collector feature on your Splunk account.
string
""
The HTTP Event Collector Token is used for accessing the HTTP Event Collector feature on your Splunk account.
The URL to your Splunk endpoint.
string
""
The URL to your Splunk endpoint; for example, https://yoursitename.splunk.com:8089.
The port should be set to the Splunk management port (default 8089).
認証で使用されるSplunk ユーザーアカウント。
string
""
このフィールドは、Password とともに、Splunk サーバーに対して認証をするために使われます。
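以下は、User とPassword を使用したBasic 認証の接続設定の一例です。URL とクレデンシャルの値はサンプルであり、実際の環境に合わせて置き換えてください。
AuthScheme=Basic;URL=https://localhost:8089;User=admin;Password=changeme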
このセクションでは、本プロバイダーの接続文字列で設定可能なSSL プロパティの全リストを提供します。
プロパティ | 説明 |
SSLServerCert | TLS/SSL を使用して接続するときに、サーバーが受け入れ可能な証明書。 |
TLS/SSL を使用して接続するときに、サーバーが受け入れ可能な証明書。
string
""
TLS/SSL 接続を使用する場合は、このプロパティを使用して、サーバーが受け入れるTLS/SSL 証明書を指定できます。コンピュータによって信頼されていない他の証明書はすべて拒否されます。
このプロパティは、次のいずれかの形式で指定できます:
説明 | 例 |
フルPEM 証明書(例では省略されています) | -----BEGIN CERTIFICATE----- MIIChTCCAe4CAQAwDQYJKoZIhv......Qw== -----END CERTIFICATE----- |
証明書を保有するローカルファイルへのパス。 | C:\cert.cer |
公開鍵(例では省略されています) | -----BEGIN RSA PUBLIC KEY----- MIGfMA0GCSq......AQAB -----END RSA PUBLIC KEY----- |
MD5 Thumbprint (hex 値はスペースまたはコロン区切りでも可) | ecadbdda5a1529c58a1e9e09828d70e4 |
SHA1 Thumbprint (hex 値はスペースまたはコロン区切りでも可) | 34a929226ae0819f2ec14b4a3d904f801cbb150d |
これを指定しない場合は、マシンが信頼するすべての証明書が受け入れられます。
すべての証明書の受け入れを示すには、'*'を使用します。セキュリティ上の理由から、これはお勧めできません。
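例えば、ローカルファイルに保存した証明書を指定する場合、接続設定は次のようになります(パスおよびその他の値はサンプルです)。
URL=https://localhost:8089;User=admin;Password=changeme;SSLServerCert=C:\cert.cer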
このセクションでは、本プロバイダーの接続文字列で設定可能なLogging プロパティの全リストを提供します。
プロパティ | 説明 |
Verbosity | ログファイルの記述をどの程度の詳細さで記載するかを決定するverbosity レベル。 |
このセクションでは、本プロバイダーの接続文字列で設定可能なSchema プロパティの全リストを提供します。
プロパティ | 説明 |
BrowsableSchemas | このプロパティは、使用可能なスキーマのサブセットにレポートされるスキーマを制限します。例えば、BrowsableSchemas=SchemaA,SchemaB,SchemaC です。 |
このプロパティは、使用可能なスキーマのサブセットにレポートされるスキーマを制限します。例えば、BrowsableSchemas=SchemaA,SchemaB,SchemaC です。
string
""
スキーマをデータベースからリストすると、負荷がかかる可能性があります。接続文字列でスキーマのリストを提供すると、パフォーマンスが向上します。
このセクションでは、本プロバイダーの接続文字列で設定可能なMiscellaneous プロパティの全リストを提供します。
プロパティ | 説明 |
IncludeInternalFields | Whether or not the CData Cloud should push the internal fields. These fields include: user, eventtype, etc. |
MaxRows | クエリで集計またはGROUP BY を使用しない場合に返される行数を制限します。これはLIMIT 句よりも優先されます。 |
MaxThreads | Specifies the number of concurrent requests. Only used when UseJobs is true. |
Pagesize | Splunk から返されるページあたりの結果の最大数。 |
PseudoColumns | このプロパティは、テーブルのカラムとして疑似カラムが含まれているかどうかを示します。 |
RowScanDepth | Set this property to control the number of rows scanned when TypeDetectionScheme is set to RowScan. |
Timeout | タイムアウトエラーがスローされ、処理をキャンセルするまでの秒数。 |
TypeDetectionScheme | Specifies how the data types of columns are determined. |
UseJobs | Specifies whether to use the jobs endpoint instead of the export endpoint. If set to true, the maximum number of returned rows is configured in Splunk's limits.conf file. |
Whether or not the CData Cloud should push the internal fields. These fields include: user, eventtype, etc.
bool
false
Whether or not the CData Cloud should push the internal fields. These fields include: user, eventtype, etc.
クエリで集計またはGROUP BY を使用しない場合に返される行数を制限します。これはLIMIT 句よりも優先されます。
int
-1
クエリで集計またはGROUP BY を使用しない場合に返される行数を制限します。これはLIMIT 句よりも優先されます。
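例えば、MaxRows=100 を設定した場合、次のクエリはLIMIT 句で1000 行を指定していても最大100 行しか返しません(SearchJobs は本ドキュメントで扱うテーブルの一例です)。
SELECT * FROM SearchJobs LIMIT 1000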
Specifies the number of concurrent requests. Only used when UseJobs is true.
string
"5"
This property allows you to issue multiple requests simultaneously, thereby improving performance. Default value is 5 threads. Setting a higher value can result in OutOfMemory issues.
Splunk から返されるページあたりの結果の最大数。
int
10000
Pagesize プロパティは、Splunk から返されるページあたりの結果の最大数に影響を与えます。より大きい値を設定すると、1ページあたりの消費メモリが増える代わりに、パフォーマンスが向上する場合があります。
このプロパティは、テーブルのカラムとして疑似カラムが含まれているかどうかを示します。
string
""
Entity Framework ではテーブルカラムでない疑似カラムに値を設定できないため、この設定はEntity Framework で特に便利です。この接続設定の値は、"Table1=Column1, Table1=Column2, Table2=Column3" の形式です。"*=*" のように"*" 文字を使用して、すべてのテーブルとすべてのカラムを含めることができます。
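例えば、DataModels テーブルのProvisional 疑似カラムをカラムとして含めるには、次のように設定します(Provisional はDataModels テーブルで利用できる疑似カラムの一例です)。
PseudoColumns=DataModels=Provisional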
Set this property to control the number of rows scanned when TypeDetectionScheme is set to RowScan.
string
"50"
Determines the number of rows used to determine the column data types.
Setting a high value may decrease performance. Setting a low value may prevent the data type from being determined properly, especially when there is null data.
タイムアウトエラーがスローされ、処理をキャンセルするまでの秒数。
int
60
Timeout が0に設定されている場合は、操作がタイムアウトしません。処理が正常に完了するか、エラー状態になるまで実行されます。
Timeout の有効期限が切れても処理が完了していない場合は、Cloud は例外をスローします。
Specifies how the data types of columns are determined.
string
"RowScan"
None | Setting TypeDetectionScheme to None will return all columns as the string type. |
RowScan | Setting TypeDetectionScheme to RowScan will scan rows to heuristically determine the data type. The RowScanDepth determines the number of rows to be scanned. |
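例えば、型検出に使用する行数を増やしたい場合は、次のように設定します(値はサンプルです)。
TypeDetectionScheme=RowScan;RowScanDepth=100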
Specifies whether to use the jobs endpoint instead of the export endpoint. If set to true, the maximum number of returned rows is configured in Splunk's limits.conf file.
bool
false
Whether to use the jobs endpoint instead of the export endpoint. While Jobs generally provide higher performance, the initial response time may be longer. If a Timeout error occurs, set the Timeout connection property to a higher value.
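例えば、jobs エンドポイントを使用し、同時リクエスト数を増やす場合の設定は次のとおりです(値はサンプルです)。
UseJobs=true;MaxThreads=10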