
SQL Server Data Access Components
SQL Server Data Access Components (SDAC) is a library of components
that provides native connectivity to SQL Server from Delphi, C++Builder,
Lazarus (and Free Pascal) for Windows (both 32-bit and 64-bit)
and Mac OS X. SDAC-based applications connect to SQL Server directly through OLE
DB, which is a native SQL Server interface. SDAC is designed to help programmers
develop faster and cleaner SQL Server database applications.
SDAC, a high-performance and feature-rich SQL Server connectivity solution, is a
complete replacement for standard SQL Server connectivity solutions and presents
an efficient native alternative to the Borland Database Engine (BDE) and standard
dbExpress driver for access to SQL Server.
Native Connectivity to SQL Server
SDAC-based DB applications are easy to deploy and do not require installation of
additional data provider layers (such as BDE or ODBC), so they can work faster
than applications based on standard Delphi data connectivity solutions. Moreover,
SDAC can work with SQL Server not only through OLE DB, but also through SQL Server
Native Client.
Wide Coverage of SQL Server Features
SDAC supports a wide range of SQL Server-specific features, such as Transparent
Application Failover, Notification, queuing and reliable messaging, SQL Server Compact
Edition, user-defined types (including HierarchyID, Geography, and Geometry), table-valued
parameters, FILESTREAM, and others.
Developing in Delphi, C++Builder, and Lazarus for Windows and Mac OS X
SDAC is a cross-platform solution for developing applications using various IDEs:
RAD Studio, Delphi, C++Builder, Lazarus (and FPC) on Windows, Mac OS X, iOS and
Android for both x86 and x64 platforms. SDAC also provides support for the FireMonkey
application development platform, which allows you to develop visually spectacular
high-performance desktop and mobile native applications.
High Development Productivity with SDAC
We provide various GUI tools that will increase your productivity: dbMonitor allows
monitoring activity of your DB applications, Dataset Manager simplifies DataSet
and DB controls tweaking, and others.
Key Features
Direct Mode
Allows your application to work with SQL Server directly via TCP/IP without involving
SQL Server Client, thus significantly facilitating deployment and configuration
of your applications.
Mobile Development
Development for iOS and Android mobile devices using SDAC is even easier,
as SDAC allows your mobile applications to work with a SQL Server database as simply as
desktop applications do.
DB Compatibility
SQL Server (including Express edition), SQL Server 2000 (including MSDE),
SQL Server 7, SQL Server Compact 4.0/3.5/3.1, SQL Azure
Data Type Mapping
If you want to make custom correspondence between SQL Server and Delphi data types,
you can use a simple and flexible Data Type Mapping engine provided by SDAC.
IDE Compatibility
Our product is compatible with the latest IDE versions: Embarcadero RAD Studio XE8,
Delphi XE8, C++Builder XE8, Lazarus (and FPC). It is also compatible with the previous
IDE versions since Delphi 6 and C++Builder 6.
Development Platforms
Now you can develop not only VCL-based applications in Delphi and LCL-based ones
in Lazarus, but also use the newest FireMonkey application development platform.
Performance
All our components and libraries are designed to help you write high-performance,
lightweight data access layers, therefore they use advanced data access algorithms
and techniques of optimization.
Monitoring
Use our freeware dbMonitor tool to monitor and analyze all the DB calls made by
your application using SQL Server data access components. dbMonitor performs
per-component tracing of SQL statement execution, commits, rollbacks, etc.
Get instant support from experienced professionals, fast and detailed responses,
user engagement and interaction, frequent builds with bug fixes, and much more.
Copyright © 1998-2015. All rights reserved.

MySQL 5.1 Reference Manual
13.2.6 LOAD DATA INFILE Syntax
LOAD DATA [LOW_PRIORITY | CONCURRENT] [LOCAL] INFILE 'file_name'
[REPLACE | IGNORE]
INTO TABLE tbl_name
[CHARACTER SET charset_name]
[{FIELDS | COLUMNS}
[TERMINATED BY 'string']
[[OPTIONALLY] ENCLOSED BY 'char']
[ESCAPED BY 'char']
]
[LINES
[STARTING BY 'string']
[TERMINATED BY 'string']
]
[IGNORE number LINES]
[(col_name_or_user_var,...)]
[SET col_name = expr,...]
The LOAD DATA INFILE statement reads rows from a text file into a
table at a very high speed. LOAD DATA INFILE is the complement of
SELECT ... INTO OUTFILE. To write data from a table to a file, use
SELECT ... INTO OUTFILE. To read the file back into a table, use
LOAD DATA INFILE. The syntax of the FIELDS and
LINES clauses is the same for both statements.
Both clauses are optional, but FIELDS must
precede LINES if both are specified.
You can also load data files by using the mysqlimport utility;
it operates by sending a LOAD DATA INFILE
statement to the server. The --local
option causes mysqlimport
to read data files from the client
host. You can specify the --compress
option to get
better performance over slow networks if the client and server
support the compressed protocol.
For more information about the efficiency of INSERT versus
LOAD DATA INFILE, and for ways of speeding up
LOAD DATA INFILE, see the manual's optimization chapter.
The file name must be given as a literal string. On Windows,
specify backslashes in path names as forward slashes or doubled
backslashes. As of MySQL 5.1.6, the character_set_filesystem system
variable controls the interpretation of the file name.
The server uses the character set indicated by the
character_set_database system variable to interpret the information
in the file. SET NAMES and the setting of character_set_client do not
affect interpretation of input. If the contents of the input file
use a character set that differs from the default, it is usually
preferable to specify the character set of the file by using the
CHARACTER SET clause, which is available as of
MySQL 5.1.17. A character set of binary
specifies “no conversion.”
LOAD DATA INFILE interprets all fields in the file as having the
same character set, regardless of the data types of the columns
into which field values are loaded. For proper interpretation of
file contents, you must ensure that it was written with the
correct character set. For example, if you write a data file with
mysqldump -T or by issuing a
SELECT ... INTO OUTFILE statement in mysql, be sure
to use a --default-character-set option so that
output is written in the character set to be used when the file is
loaded with LOAD DATA INFILE.
It is not possible to load data files that use the
ucs2 character set.
If you use LOW_PRIORITY, execution of the
LOAD DATA statement is delayed
until no other clients are reading from the table. This affects
only storage engines that use only table-level locking (such as
MyISAM, MEMORY, and MERGE).
If you specify CONCURRENT with a
MyISAM table that satisfies the condition for
concurrent inserts (that is, it contains no free blocks in the
middle), other threads can retrieve data from the table while
LOAD DATA is executing. This option
affects the performance of LOAD DATA
a bit, even if no other thread is using the table
at the same time.
With row-based replication, CONCURRENT is
replicated regardless of MySQL version. With statement-based
replication, CONCURRENT is not replicated prior
to MySQL 5.1.43 (see Bug #34628).
Prior to MySQL 5.1.23, LOAD DATA
performed very poorly when importing into partitioned tables.
The statement now uses buffering to improve performance;
however, the buffer uses 130KB of memory per partition to achieve
this. (Bug #26527)
The LOCAL keyword affects the expected location of
the file and error handling, as described later.
LOCAL works only if your server and your client
both have been configured to permit it. For example, if
mysqld was started with --local-infile=0,
LOCAL does not work.
The LOCAL keyword affects where the file is
expected to be found:
If LOCAL is specified, the file is read by
the client program on the client host and sent to the server.
The file can be given as a full path name to specify its exact
location. If given as a relative path name, the name is
interpreted relative to the directory in which the client
program was started.
When using LOCAL with
LOAD DATA, a copy of the file
is created in the server's temporary directory. This is
not the directory determined by the value
of tmpdir or slave_load_tmpdir, but rather
the operating system's temporary directory, and it is not
configurable in the MySQL Server. (Typically the system
temporary directory is /tmp on Linux
systems and C:\WINDOWS\TEMP on Windows.)
Lack of sufficient space for the copy in this directory can
cause the LOAD DATA LOCAL statement to fail.
If LOCAL is not specified, the file must be
located on the server host and is read directly by the server.
The server uses the following rules to locate the file:
If the file name is an absolute path name, the server uses
it as given.
If the file name is a relative path name with one or more
leading components, the server searches for the file
relative to the server's data directory.
If a file name with no leading components is given, the
server looks for the file in the database directory of the
default database.
In the non-LOCAL case, these rules mean that a
file named as ./myfile.txt is read from the
server's data directory, whereas the file named as
myfile.txt is read from the database
directory of the default database. For example, if
db1 is the default database, the following
statement reads the file
data.txt from the database directory for
db1, even though the statement explicitly loads
the file into a table in the db2 database:
LOAD DATA INFILE 'data.txt' INTO TABLE db2.my_table;
A regression in MySQL 5.1.40 caused the database referenced in a
fully qualified table name to be ignored by
LOAD DATA when using replication
with either STATEMENT or
MIXED as the binary logging format; this
could lead to problems if the table was not in the current
database. As a workaround, you can specify the correct database
with a USE statement prior to
executing LOAD DATA. If
necessary, you can reset the default database with a second
USE statement following the LOAD DATA
statement.
This issue was fixed in MySQL 5.1.41. (Bug #48297)
For security reasons, when reading text files located on the
server, the files must either reside in the database directory or
be readable by all. Also, to use
LOAD DATA INFILE on server files, you must have the
FILE privilege.
For non-LOCAL load operations, if the
secure_file_priv system variable
is set to a nonempty directory name, the file to be loaded must be
located in that directory.
Using LOCAL is a bit slower than letting the
server access the files directly, because the contents of the file
must be sent over the connection by the client to the server. On
the other hand, you do not need the FILE
privilege to load local files.
LOCAL also affects error handling:
With LOAD DATA INFILE, data-interpretation and duplicate-key errors
terminate the operation.
With LOAD DATA LOCAL INFILE, data-interpretation and duplicate-key
errors become warnings and the operation continues, because the
server has no way to stop transmission of the file in the
middle of the operation. For duplicate-key errors, this is the
same as if IGNORE is specified.
IGNORE is explained further later in this section.
The REPLACE and IGNORE
keywords control handling of input rows that duplicate existing
rows on unique key values:
If you specify REPLACE, input rows replace
existing rows. In other words, rows that have the same value
for a primary key or unique index as an existing row replace that row.
If you specify IGNORE, rows that duplicate
an existing row on a unique key value are discarded.
If you do not specify either option, the behavior depends on
whether the LOCAL keyword is specified.
Without LOCAL, an error occurs when a
duplicate key value is found, and the rest of the text file is
ignored. With LOCAL, the default behavior
is the same as if IGNORE is specified; this
is because the server has no way to stop transmission of the
file in the middle of the operation.
To ignore foreign key constraints during the load operation, issue
a SET foreign_key_checks = 0 statement before
executing LOAD DATA.
If you use LOAD DATA INFILE
on an empty MyISAM table, all
nonunique indexes are created in a separate batch (as for
REPAIR TABLE). Normally, this makes
LOAD DATA INFILE
much faster when you have many indexes. In some
extreme cases, you can create the indexes even faster by turning
them off with ALTER TABLE ... DISABLE KEYS
before loading the file into the table and using ALTER
TABLE ... ENABLE KEYS to re-create the indexes after
loading the file.
For both the LOAD DATA INFILE and
SELECT ... INTO OUTFILE statements, the syntax of the
FIELDS and LINES clauses is
the same. Both clauses are optional, but FIELDS
must precede LINES if both are specified.
If you specify a FIELDS clause, each of its
subclauses (TERMINATED BY,
[OPTIONALLY] ENCLOSED BY, and ESCAPED
BY) is also optional, except that you must specify at
least one of them.
If you specify no FIELDS or
LINES clause, the defaults are the same as if
you had written this:
FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\'
LINES TERMINATED BY '\n' STARTING BY ''
(Backslash is the MySQL escape character within strings in SQL
statements, so to specify a literal backslash, you must specify
two backslashes for the value to be interpreted as a single
backslash. The escape sequences '\t' and
'\n' specify tab and newline characters,
respectively.)
In other words, the defaults cause LOAD DATA INFILE
to act as follows when reading input:
Look for line boundaries at newlines.
Do not skip over any line prefix.
Break lines into fields at tabs.
Do not expect fields to be enclosed within any quoting
characters.
Interpret characters preceded by the escape character
“\” as escape sequences. For
example, “\t”,
“\n”, and
“\\” signify tab, newline, and
backslash, respectively. See the discussion of FIELDS
ESCAPED BY later for the full list of escape
sequences.
Conversely, the defaults cause SELECT ... INTO OUTFILE
to act as follows when writing output:
Write tabs between fields.
Do not enclose fields within any quoting characters.
Use “\” to escape instances of
tab, newline, or “\” that
occur within field values.
Write newlines at the ends of lines.
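Taken together, the default reading and writing behavior can be sketched in Python. This is an illustrative simulation of the documented defaults, not MySQL's actual parser; it handles only the \t, \n, and \\ escape sequences:

```python
def write_row(values):
    # Default SELECT ... INTO OUTFILE behavior: tab between fields,
    # "\" used to escape tab, newline, and "\" within field values.
    def esc(v):
        return v.replace("\\", "\\\\").replace("\t", "\\t").replace("\n", "\\n")
    return "\t".join(esc(v) for v in values) + "\n"

def read_row(line):
    # Default LOAD DATA INFILE behavior: break the line into fields at
    # unescaped tabs, decoding "\"-prefixed escape sequences.
    line = line.rstrip("\n")
    fields, cur, i = [], "", 0
    while i < len(line):
        ch = line[i]
        if ch == "\\":                 # escape char: decode the next char
            cur += {"t": "\t", "n": "\n", "\\": "\\"}.get(line[i + 1], line[i + 1])
            i += 2
        elif ch == "\t":               # unescaped tab terminates the field
            fields.append(cur)
            cur = ""
            i += 1
        else:
            cur += ch
            i += 1
    fields.append(cur)
    return fields

row = ["a\tb", "line1\nline2", "back\\slash"]
assert read_row(write_row(row)) == row     # the two defaults round-trip
```

Because the writer escapes exactly the characters the reader treats specially, any field value survives the round trip, which is why matched FIELDS and LINES options matter.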
If you have generated the text file on a Windows system, you
might have to use LINES TERMINATED BY '\r\n'
to read the file properly, because Windows programs typically
use two characters as a line terminator. Some programs, such as
WordPad, might use \r as a
line terminator when writing files. To read such files, use
LINES TERMINATED BY '\r'.
If all the lines you want to read in have a common prefix that you
want to ignore, you can use LINES STARTING BY
'prefix_string' to skip over
the prefix, and anything before it. If a line
does not include the prefix, the entire line is skipped. Suppose
that you issue the following statement:
LOAD DATA INFILE '/tmp/test.txt' INTO TABLE test
FIELDS TERMINATED BY ','
LINES STARTING BY 'xxx';
If the data file looks like this:
xxx"abc",1
something xxx"def",2
The resulting rows will be ("abc",1) and
("def",2). The third row in the file is skipped
because it does not contain the prefix.
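This prefix-skipping rule can be simulated in Python (an illustration of the behavior just described, using the same sample data):

```python
def lines_starting_by(text, prefix):
    # LINES STARTING BY 'prefix': for each line, skip the prefix and
    # anything before it; lines without the prefix are skipped entirely.
    out = []
    for line in text.splitlines():
        idx = line.find(prefix)
        if idx >= 0:
            out.append(line[idx + len(prefix):])
    return out

data = 'xxx"abc",1\nsomething xxx"def",2\n"ghi",3\n'
# Only the first two lines survive; each loses everything up to "xxx".
assert lines_starting_by(data, "xxx") == ['"abc",1', '"def",2']
```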
The IGNORE number
LINES option can be used to ignore lines at the start of
the file. For example, you can use IGNORE 1
LINES to skip over an initial header line containing
column names:
LOAD DATA INFILE '/tmp/test.txt' INTO TABLE test IGNORE 1 LINES;
When you use SELECT ... INTO OUTFILE
in tandem with LOAD DATA INFILE
to write data from a database into a file and
then read the file back into the database later, the field- and
line-handling options for both statements must match. Otherwise,
LOAD DATA INFILE will not interpret the contents of the file
properly. Suppose that you use SELECT ... INTO OUTFILE
to write a file with fields delimited by commas:
SELECT * INTO OUTFILE 'data.txt'
FIELDS TERMINATED BY ','
FROM table2;
To read the comma-delimited file back in, the correct statement would be:
LOAD DATA INFILE 'data.txt' INTO TABLE table2
FIELDS TERMINATED BY ',';
If instead you tried to read in the file with the statement shown
following, it wouldn't work because it instructs
LOAD DATA INFILE to look for tabs between fields:
LOAD DATA INFILE 'data.txt' INTO TABLE table2
FIELDS TERMINATED BY '\t';
The likely result is that each input line would be interpreted as
a single field.
LOAD DATA INFILE can be used to read files obtained from external
sources. For example, many programs can export data in
comma-separated values (CSV) format, such that lines have fields
separated by commas and enclosed within double quotation marks,
with an initial line of column names. If the lines in such a file
are terminated by carriage return/newline pairs, the statement
shown here illustrates the field- and line-handling options you
would use to load the file:
LOAD DATA INFILE 'data.txt' INTO TABLE tbl_name
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;
If the input values are not necessarily enclosed within quotation
marks, use OPTIONALLY before the
ENCLOSED BY keywords.
Any of the field- or line-handling options can specify an empty
string (''). If not empty, the FIELDS
[OPTIONALLY] ENCLOSED BY and FIELDS ESCAPED
BY values must be a single character. The
FIELDS TERMINATED BY, LINES STARTING
BY, and LINES TERMINATED BY values
can be more than one character. For example, to write lines that
are terminated by carriage return/linefeed pairs, or to read a
file containing such lines, specify a LINES TERMINATED BY
'\r\n' clause.
To read a file containing jokes that are separated by lines
consisting of %%, you can do this:
CREATE TABLE jokes
(a INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
joke TEXT NOT NULL);
LOAD DATA INFILE '/tmp/jokes.txt' INTO TABLE jokes
FIELDS TERMINATED BY ''
LINES TERMINATED BY '\n%%\n' (joke);
FIELDS [OPTIONALLY] ENCLOSED BY controls
quoting of fields. For output
(SELECT ... INTO OUTFILE), if you omit the word
OPTIONALLY, all fields are enclosed by the
ENCLOSED BY character. An example of such
output (using a comma as the field delimiter) is shown here:
"1","a string","100.20"
"2","a string containing a , comma","102.20"
"3","a string containing a \" quote","102.20"
"4","a string containing a \", quote and comma","102.20"
If you specify OPTIONALLY, the
ENCLOSED BY character is used only to enclose
values from columns that have a string data type (such as
CHAR, BINARY, TEXT, or ENUM):
1,"a string",100.20
2,"a string containing a , comma",102.20
3,"a string containing a \" quote",102.20
4,"a string containing a \", quote and comma",102.20
Occurrences of the ENCLOSED BY character within
a field value are escaped by prefixing them with the
ESCAPED BY character. Also note that if you
specify an empty ESCAPED BY value, it is
possible to inadvertently generate output that cannot be read
properly by LOAD DATA INFILE. For example, the preceding output
would appear as follows if the escape character is empty. Observe
that the second field in the fourth line contains a comma
following the quote, which (erroneously) appears to terminate the field:
1,"a string",100.20
2,"a string containing a , comma",102.20
3,"a string containing a " quote",102.20
4,"a string containing a ", quote and comma",102.20
For input, the ENCLOSED BY character, if
present, is stripped from the ends of field values. (This is true
regardless of whether OPTIONALLY is specified;
OPTIONALLY has no effect on input
interpretation.) Occurrences of the ENCLOSED BY
character preceded by the ESCAPED BY character
are interpreted as part of the current field value.
If the field begins with the ENCLOSED BY
character, instances of that character are recognized as
terminating a field value only if followed by the field or line
TERMINATED BY sequence. To avoid ambiguity,
occurrences of the ENCLOSED BY character within
a field value can be doubled and are interpreted as a single
instance of the character. For example, if ENCLOSED BY
'"' is specified, quotation marks are handled as shown here:
"The ""BIG"" boss"  -> The "BIG" boss
The "BIG" boss      -> The "BIG" boss
The ""BIG"" boss    -> The ""BIG"" boss
FIELDS ESCAPED BY controls how to read or write
special characters:
For input, if the FIELDS ESCAPED BY
character is not empty, occurrences of that character are
stripped and the following character is taken literally as
part of a field value. Exceptions are certain two-character
sequences where the first character is the escape character.
These sequences are shown in the following table (using
“\” for the escape character):
\0   An ASCII NUL (0x00) character
\b   A backspace character
\n   A newline (linefeed) character
\r   A carriage return character
\t   A tab character
\Z   ASCII 26 (Control+Z)
\N   NULL
The rules for NULL handling are described
later in this section.
For more information about
“\”-escape syntax, see the manual's
description of string literals.
If the FIELDS ESCAPED BY character is
empty, escape-sequence interpretation does not occur.
For output, if the FIELDS ESCAPED BY
character is not empty, it is used to prefix the following
characters on output:
The FIELDS ESCAPED BY character
The FIELDS [OPTIONALLY] ENCLOSED BY character
The first character of the FIELDS TERMINATED
BY and LINES TERMINATED BY values
ASCII NUL (the zero-valued byte; what is actually written
following the escape character is ASCII
“0”, not a zero-valued byte)
If the FIELDS ESCAPED BY character is
empty, no characters are escaped and NULL
is output as NULL, not
\N. It is probably not a good idea to
specify an empty escape character, particularly if field
values in your data contain any of the characters in the list
just given.
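The output-side prefixing rules can be sketched as follows (illustrative only; the delimiter arguments are example values, and only the characters listed above are escaped):

```python
def escape_for_output(value, escape="\\", enclosed='"',
                      field_term=",", line_term="\n"):
    # Prefix the escape character to: the escape char itself, the
    # ENCLOSED BY char, the first character of the field and line
    # terminators, and ASCII NUL (written as escape + the digit "0").
    out = []
    for ch in value:
        if ch == "\0":
            out.append(escape + "0")
        elif ch in (escape, enclosed, field_term[0], line_term[0]):
            out.append(escape + ch)
        else:
            out.append(ch)
    return "".join(out)

assert escape_for_output('a "quoted" value') == 'a \\"quoted\\" value'
assert escape_for_output("a,b\0c") == "a\\,b\\0c"
```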
In certain cases, field- and line-handling options interact:
If LINES TERMINATED BY is an empty string
and FIELDS TERMINATED BY is nonempty, lines
are also terminated with the FIELDS TERMINATED BY value.
If the FIELDS TERMINATED BY and
FIELDS ENCLOSED BY values are both empty
(''), a fixed-row (nondelimited) format is
used. With fixed-row format, no delimiters are used between
fields (but you can still have a line terminator). Instead,
column values are read and written using a field width wide
enough to hold all values in the field. For
TINYINT, SMALLINT, MEDIUMINT, INT, and
BIGINT, the field widths are 4,
6, 8, 11, and 20, respectively, no matter what the declared
display width is.
LINES TERMINATED BY is still used to
separate lines. If a line does not contain all fields, the
rest of the columns are set to their default values. If you do
not have a line terminator, you should set this to
''. In this case, the text file must
contain all fields for each row.
Fixed-row format also affects handling of
NULL values, as described later.
Fixed-size format does not work if you are using a multibyte
character set.
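Fixed-row reading can be sketched like this (an illustrative simulation; the widths are the integer-type field widths quoted above):

```python
def read_fixed_row(line, widths):
    # Fixed-row (nondelimited) format: no delimiters between fields;
    # slice each field at its fixed width and strip the padding.
    fields, pos = [], 0
    for w in widths:
        fields.append(line[pos:pos + w].strip())
        pos += w
    return fields

# The field widths 4, 6, 8, 11, and 20 mentioned in the text above.
line = f"{1:>4}{10:>6}{100:>8}{1000:>11}{100000:>20}"
assert read_fixed_row(line, [4, 6, 8, 11, 20]) == ["1", "10", "100", "1000", "100000"]
```

Note that slicing by character position is exactly why this format breaks with multibyte character sets, as the text states.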
Handling of NULL values varies according to the
FIELDS and LINES options in use:
For the default FIELDS and
LINES values, NULL is
written as a field value of \N for output,
and a field value of \N is read as
NULL for input (assuming that the
ESCAPED BY character is “\”).
If FIELDS ENCLOSED BY is not empty, a field
containing the literal word NULL as its
value is read as a NULL value. This differs
from the word NULL enclosed within
FIELDS ENCLOSED BY characters, which is
read as the string 'NULL'.
If FIELDS ESCAPED BY is empty,
NULL is written as the word NULL.
With fixed-row format (which is used when FIELDS
TERMINATED BY and FIELDS ENCLOSED
BY are both empty), NULL is
written as an empty string. This causes both
NULL values and empty strings in the table
to be indistinguishable when written to the file because both
are written as empty strings. If you need to be able to tell
the two apart when reading the file back in, you should not
use fixed-row format.
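The first two NULL rules can be sketched as follows (illustrative; assumes the default ESCAPED BY '\' and a nonempty ENCLOSED BY '"'):

```python
def decode_field(raw, quote='"'):
    # Unenclosed \N (with the default escape character) or the bare
    # word NULL reads as SQL NULL; "NULL" inside quotes is the
    # four-character string, not SQL NULL.
    if raw in ("\\N", "NULL"):
        return None                       # SQL NULL
    if len(raw) >= 2 and raw.startswith(quote) and raw.endswith(quote):
        return raw[1:-1]                  # enclosed literal string
    return raw

assert decode_field("\\N") is None
assert decode_field("NULL") is None       # bare word: read as NULL
assert decode_field('"NULL"') == "NULL"   # the string 'NULL', not SQL NULL
```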
An attempt to load NULL into a NOT
NULL column causes assignment of the implicit default
value for the column's data type and a warning, or an error in
strict SQL mode. Implicit default values are discussed in the
manual's section on data type default values.
Some cases are not supported by LOAD DATA INFILE:
Fixed-size rows (FIELDS TERMINATED BY and
FIELDS ENCLOSED BY both empty) and
BLOB or TEXT columns.
If you specify one separator that is the same as or a prefix
of another,
cannot interpret the input properly. For
example, the following FIELDS clause would
cause problems:
FIELDS TERMINATED BY '"' ENCLOSED BY '"'
If FIELDS ESCAPED BY is empty, a field
value that contains an occurrence of FIELDS ENCLOSED
BY or LINES TERMINATED BY
followed by the FIELDS TERMINATED BY value
causes LOAD DATA INFILE to stop reading a field or line too early.
This happens because LOAD DATA INFILE
cannot properly determine where the field or
line value ends.
The following example loads all columns of the
persondata table:
LOAD DATA INFILE 'persondata.txt' INTO TABLE persondata;
By default, when no column list is provided at the end of the
LOAD DATA INFILE statement, input lines are expected to contain a
field for each table column. If you want to load only some of a
table's columns, specify a column list:
LOAD DATA INFILE 'persondata.txt' INTO TABLE persondata (col1,col2,...);
You must also specify a column list if the order of the fields in
the input file differs from the order of the columns in the table.
Otherwise, MySQL cannot tell how to match input fields with table columns.
The column list can contain either column names or user variables.
With user variables, the SET clause enables you
to perform transformations on their values before assigning the
result to columns.
User variables in the SET clause can be used in
several ways. The following example uses the first input column
directly for the value of t1.column1, and
assigns the second input column to a user variable that is
subjected to a division operation before being used for the value
of t1.column2:
LOAD DATA INFILE 'file.txt'
INTO TABLE t1
(column1, @var1)
SET column2 = @var1/100;
The SET clause can be used to supply values not
derived from the input file. The following statement sets
column3 to the current date and time:
LOAD DATA INFILE 'file.txt'
INTO TABLE t1
(column1, column2)
SET column3 = CURRENT_TIMESTAMP;
You can also discard an input value by assigning it to a user
variable and not assigning the variable to a table column:
LOAD DATA INFILE 'file.txt'
INTO TABLE t1
(column1, @dummy, column2, @dummy, column3);
Use of the column/variable list and SET clause
is subject to the following restrictions:
Assignments in the SET clause should have
only column names on the left hand side of assignment
operators.
You can use subqueries in the right hand side of
SET assignments. A subquery that returns a
value to be assigned to a column may be a scalar subquery
only. Also, you cannot use a subquery to select from the table
that is being loaded.
Lines ignored by an IGNORE clause are not
processed for the column/variable list or
SET clause.
User variables cannot be used when loading data with fixed-row
format because user variables do not have a display width.
When processing an input line, LOAD DATA INFILE
splits it into fields and uses the values according
to the column/variable list and the SET clause,
if they are present. Then the resulting row is inserted into the
table. If there are BEFORE INSERT or
AFTER INSERT triggers for the table, they are
activated before or after inserting the row, respectively.
If an input line has too many fields, the extra fields are ignored
and the number of warnings is incremented.
If an input line has too few fields, the table columns for which
input fields are missing are set to their default values. Default
value assignment is described in the manual's section on data type
default values.
An empty field value is interpreted differently from a missing field value:
For string types, the column is set to the empty string.
For numeric types, the column is set to 0.
For date and time types, the column is set to the appropriate
“zero” value for the type.
These are the same values that result if you assign an empty
string explicitly to a string, numeric, or date or time type
in an INSERT or UPDATE statement.
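These empty-field defaults can be summarized in a small sketch (illustrative type names; the "zero" date and time values follow MySQL's conventions):

```python
def empty_field_default(col_type):
    # Value assigned when an input field is present but empty.
    if col_type in ("CHAR", "VARCHAR", "TEXT"):
        return ""                         # string types: empty string
    if col_type in ("INT", "DECIMAL", "DOUBLE"):
        return 0                          # numeric types: zero
    if col_type == "DATE":
        return "0000-00-00"               # the "zero" value for the type
    if col_type == "DATETIME":
        return "0000-00-00 00:00:00"
    raise ValueError("unhandled type: " + col_type)

assert empty_field_default("VARCHAR") == ""
assert empty_field_default("INT") == 0
assert empty_field_default("DATE") == "0000-00-00"
```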
Treatment of empty or incorrect field values differs from that
just described if the SQL mode is set to a restrictive value. For
example, if sql_mode is set to TRADITIONAL, conversion
of an empty value or a value such as 'x' for a
numeric column results in an error, not conversion to 0. (With
LOCAL, warnings occur rather than errors, even
with a restrictive sql_mode
value, because the server has no way to stop transmission of the
file in the middle of the operation.)
TIMESTAMP columns are set to the
current date and time only if there is a NULL
value for the column (that is, \N) and the
column is not declared to permit NULL values,
or if the TIMESTAMP column's
default value is the current timestamp and it is omitted from the
field list when a field list is specified.
LOAD DATA INFILE regards all input as strings, so you cannot use
numeric values for ENUM or SET
columns the way you can with INSERT
statements. All ENUM and SET
values must be specified as strings.
BIT values cannot be loaded using
binary notation (for example, b'011010'). To
work around this, specify the values as regular integers and use
the SET clause to convert them so that MySQL
performs a numeric type conversion and loads them into the
BIT column properly:
shell> cat /tmp/bit_test.txt
2
127
shell> mysql test
mysql> LOAD DATA INFILE '/tmp/bit_test.txt'
    -> INTO TABLE bit_test (@var1) SET b = CAST(@var1 AS UNSIGNED);
Query OK, 2 rows affected (0.00 sec)
Records: 2  Deleted: 0  Skipped: 0  Warnings: 0

mysql> SELECT BIN(b+0) FROM bit_test;
+----------+
| bin(b+0) |
+----------+
| 10       |
| 1111111  |
+----------+
2 rows in set (0.00 sec)
On Unix, if you need LOAD DATA to
read from a pipe, you can use the following technique (the example
loads a listing of the / directory into the
table db1.t1):
mkfifo /mysql/data/db1/ls.dat
chmod 666 /mysql/data/db1/ls.dat
find / -ls > /mysql/data/db1/ls.dat &
mysql -e "LOAD DATA INFILE 'ls.dat' INTO TABLE t1" db1
Here you must run the command that generates the data to be loaded
and the mysql commands either on separate
terminals, or run the data generation process in the background
(as shown in the preceding example). If you do not do this, the
pipe will block until data is read by the mysql process.
When the LOAD DATA INFILE statement finishes, it returns an information
string in the following format:
Records: 1  Deleted: 0  Skipped: 0  Warnings: 0
Warnings occur under the same circumstances as when values are
inserted using the INSERT statement, except that
LOAD DATA INFILE also generates warnings when there are too few or
too many fields in the input row.
You can use SHOW WARNINGS to get a
list of the first max_error_count
warnings as information about what went wrong.
If you are using the C API, you can get information about the
statement by calling the mysql_info() function.
For partitioned tables using storage engines that employ table
locks, such as MyISAM, any locks
caused by LOAD DATA perform locks on all
partitions of the table. This does not apply to tables using
storage engines which employ row-level locking, such as
InnoDB.
