[/==============================================================================
    Copyright (C) 2001-2011 Joel de Guzman
    Copyright (C) 2001-2011 Hartmut Kaiser

    Distributed under the Boost Software License, Version 1.0. (See accompanying
    file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
===============================================================================/]
[section:lexer_api Lexer API]

[heading Description]

The library provides a couple of free functions to make using the lexer a snap.
These functions come in three forms. The first form, `tokenize`, simplifies the
use of a standalone lexer (without parsing). The second form,
`tokenize_and_parse`, combines the lexer step with parsing on
the token level (without a skipper). The third, `tokenize_and_phrase_parse`,
works on the token level as well, but additionally employs a skip parser. The
latter two forms accept attributes by reference; after a successful parse these
attributes hold the parsed values.

[heading Header]

    // forwards to <boost/spirit/home/lex/tokenize_and_parse.hpp>
    #include <boost/spirit/include/lex_tokenize_and_parse.hpp>

For variadic attributes:

    // forwards to <boost/spirit/home/lex/tokenize_and_parse_attr.hpp>
    #include <boost/spirit/include/lex_tokenize_and_parse_attr.hpp>

The variadic attributes version of the API allows one or more
attributes to be passed into the API functions. The functions taking two
or more attributes are usable only when the parser expression is a
__qi_sequence__. In this case each of the
attributes passed has to match the corresponding part of the sequence.

Also, see __include_structure__.

[heading Namespace]

[table
    [[Name]]
    [[`boost::spirit::lex::tokenize` ]]
    [[`boost::spirit::lex::tokenize_and_parse` ]]
    [[`boost::spirit::lex::tokenize_and_phrase_parse` ]]
    [[`boost::spirit::qi::skip_flag::postskip` ]]
    [[`boost::spirit::qi::skip_flag::dont_postskip` ]]
]

[heading Synopsis]

The `tokenize` function is one of the main lexer API functions. It
simplifies using a lexer to tokenize a given input sequence. Its main
purpose is to use the lexer to tokenize all of the input.

Both functions take a pair of iterators spanning the underlying input
stream to scan, the lexer object (built from the token definitions),
and an (optional) functor being called for each of the generated tokens. If no
function object `f` is given, the generated tokens will be discarded.

The functions return `true` if the scanning of the input succeeded (the
given input sequence has been successfully matched by the given
token definitions).

The argument `f` is expected to be a function (callable) object taking a single
argument of the token type and returning a `bool` indicating whether
tokenization should continue. If it returns `false`, tokenization is canceled
and the function `tokenize` returns `false` as well.

The `initial_state` argument forces lexing to start with the given lexer state.
If this is omitted lexing starts in the `"INITIAL"` state.

    template <typename Iterator, typename Lexer>
    inline bool
    tokenize(
        Iterator& first
      , Iterator last
      , Lexer const& lex
      , typename Lexer::char_type const* initial_state = 0);

    template <typename Iterator, typename Lexer, typename F>
    inline bool
    tokenize(
        Iterator& first
      , Iterator last
      , Lexer const& lex
      , F f
      , typename Lexer::char_type const* initial_state = 0);

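For illustration, here is a minimal, self-contained sketch of using `tokenize`
together with a token functor, loosely modelled after the word counting
examples shipped with the library. The token definition `word_tokens`, the
functor `count_tokens`, and the token ids are illustrative names only:

    #include <boost/spirit/include/lex_lexertl.hpp>
    #include <boost/spirit/include/lex_tokenize_and_parse.hpp>
    #include <cstddef>
    #include <iostream>
    #include <string>

    namespace lex = boost::spirit::lex;

    enum token_ids { ID_WORD = 1000, ID_EOL, ID_CHAR };

    // illustrative token definition: words, newlines, and any other character
    template <typename Lexer>
    struct word_tokens : lex::lexer<Lexer>
    {
        word_tokens()
        {
            this->self.add
                ("[^ \t\n]+", ID_WORD)
                ("\n", ID_EOL)
                (".", ID_CHAR)
            ;
        }
    };

    // functor invoked for every token; returning true continues tokenization
    struct count_tokens
    {
        typedef bool result_type;

        count_tokens(std::size_t& count) : count(count) {}

        template <typename Token>
        bool operator()(Token const&) const { ++count; return true; }

        std::size_t& count;
    };

    int main()
    {
        typedef lex::lexertl::token<char const*> token_type;
        typedef lex::lexertl::lexer<token_type> lexer_type;

        word_tokens<lexer_type> lexer;

        std::string input("Our hiking boots are ready.");
        char const* first = input.c_str();
        char const* last = first + input.size();

        std::size_t count = 0;
        bool r = lex::tokenize(first, last, lexer, count_tokens(count));

        std::cout << (r ? "ok, " : "failed, ")
                  << count << " tokens seen" << std::endl;
        return 0;
    }
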
The `tokenize_and_parse` function is one of the main lexer API
functions. It simplifies using a lexer as the underlying token source
while parsing a given input sequence.

The functions take a pair of iterators spanning the underlying input
stream to parse, the lexer object (built from the token definitions)
and a parser object (built from the parser grammar definition). Additionally
they may take the attributes for the parser step.

The functions return `true` if the parsing succeeded (the given input
sequence has been successfully matched by the given grammar).

    template <typename Iterator, typename Lexer, typename ParserExpr>
    inline bool
    tokenize_and_parse(
        Iterator& first
      , Iterator last
      , Lexer const& lex
      , ParserExpr const& expr);

    template <typename Iterator, typename Lexer, typename ParserExpr
      , typename Attr1, typename Attr2, ..., typename AttrN>
    inline bool
    tokenize_and_parse(
        Iterator& first
      , Iterator last
      , Lexer const& lex
      , ParserExpr const& expr
      , Attr1 const& attr1, Attr2 const& attr2, ..., AttrN const& attrN);

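As an illustration, the following sketch parses a comma separated list of
numbers on the token level and collects the values into a single attribute.
The token definition `number_tokens` and all other names are illustrative
only, not part of the API described here:

    #include <boost/spirit/include/lex_lexertl.hpp>
    #include <boost/spirit/include/lex_tokenize_and_parse.hpp>
    #include <boost/spirit/include/qi.hpp>
    #include <iostream>
    #include <string>
    #include <vector>

    namespace lex = boost::spirit::lex;
    namespace qi = boost::spirit::qi;

    // illustrative token definition: numbers exposing an unsigned int
    // attribute, plus ',' as a single character token
    template <typename Lexer>
    struct number_tokens : lex::lexer<Lexer>
    {
        number_tokens() : number("[0-9]+")
        {
            this->self = number | ',';
        }
        lex::token_def<unsigned int> number;
    };

    int main()
    {
        typedef lex::lexertl::token<
            char const*, boost::mpl::vector<unsigned int>
        > token_type;
        typedef lex::lexertl::lexer<token_type> lexer_type;

        number_tokens<lexer_type> tokens;

        std::string input("1,2,3");
        char const* first = input.c_str();
        char const* last = first + input.size();

        // the list parser exposes a std::vector<unsigned int> attribute
        std::vector<unsigned int> values;
        bool r = lex::tokenize_and_parse(first, last, tokens,
            tokens.number % ',', values);

        if (r)
            std::cout << values.size() << " numbers parsed" << std::endl;
        return 0;
    }
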
The functions `tokenize_and_phrase_parse` take a pair of iterators spanning
the underlying input stream to parse, the lexer object (built from the token
definitions) and a parser object (built from the parser grammar definition).
The additional skipper parameter is used as the skip parser during
the parsing process. Additionally they may take the attributes for the parser
step.

The functions return `true` if the parsing succeeded (the given input
sequence has been successfully matched by the given grammar).

    template <typename Iterator, typename Lexer, typename ParserExpr
      , typename Skipper>
    inline bool
    tokenize_and_phrase_parse(
        Iterator& first
      , Iterator last
      , Lexer const& lex
      , ParserExpr const& expr
      , Skipper const& skipper
      , BOOST_SCOPED_ENUM(skip_flag) post_skip = skip_flag::postskip);

    template <typename Iterator, typename Lexer, typename ParserExpr
      , typename Skipper, typename Attr1, typename Attr2, ..., typename AttrN>
    inline bool
    tokenize_and_phrase_parse(
        Iterator& first
      , Iterator last
      , Lexer const& lex
      , ParserExpr const& expr
      , Skipper const& skipper
      , Attr1 const& attr1, Attr2 const& attr2, ..., AttrN const& attrN);

    template <typename Iterator, typename Lexer, typename ParserExpr
      , typename Skipper, typename Attr1, typename Attr2, ..., typename AttrN>
    inline bool
    tokenize_and_phrase_parse(
        Iterator& first
      , Iterator last
      , Lexer const& lex
      , ParserExpr const& expr
      , Skipper const& skipper
      , BOOST_SCOPED_ENUM(skip_flag) post_skip
      , Attr1 const& attr1, Attr2 const& attr2, ..., AttrN const& attrN);

The maximum number of supported arguments is limited by the preprocessor
constant `SPIRIT_ARGUMENTS_LIMIT`. This constant defaults to the value defined
by the preprocessor constant `PHOENIX_LIMIT` (which in turn defaults to `10`).

[note The variadic functions with two or more attributes internally combine
      references to all passed attributes into a `fusion::vector` and forward
      this as a single, combined attribute to the corresponding one-attribute
      function.]

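For example, assuming a hypothetical lexer object `tokens` exposing an
`identifier` token (attribute `std::string`) and a `number` token (attribute
`unsigned int`), two attributes can be passed for a sequence of those two
tokens:

    // illustrative fragment: the parser expression is a two element sequence,
    // so two attributes may be passed, one per sequence element
    std::string name;
    unsigned int value = 0;
    bool r = lex::tokenize_and_parse(first, last, tokens,
        tokens.identifier >> tokens.number, name, value);
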
The `tokenize_and_phrase_parse` functions not taking an explicit `skip_flag`
as one of their arguments invoke the passed skipper after a successful match
of the parser expression. This can be inhibited by using the overloads taking
a `skip_flag` and passing `skip_flag::dont_postskip` for the corresponding
argument.

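Putting it together, the following sketch (again with purely illustrative
names) parses a comma separated list of numbers while skipping whitespace
recognized in a separate lexer state `"WS"`, and passes
`skip_flag::dont_postskip` to leave any trailing whitespace unconsumed:

    #include <boost/spirit/include/lex_lexertl.hpp>
    #include <boost/spirit/include/lex_tokenize_and_parse.hpp>
    #include <boost/spirit/include/qi.hpp>
    #include <iostream>
    #include <string>
    #include <vector>

    namespace lex = boost::spirit::lex;
    namespace qi = boost::spirit::qi;

    // illustrative token definition: numbers and ',' in the default state,
    // whitespace in a separate lexer state "WS" used for skipping
    template <typename Lexer>
    struct number_tokens : lex::lexer<Lexer>
    {
        number_tokens() : number("[0-9]+")
        {
            this->self = number | ',';
            this->self("WS") = lex::token_def<>("[ \\t\\n]+");
        }
        lex::token_def<unsigned int> number;
    };

    int main()
    {
        typedef lex::lexertl::token<
            char const*, boost::mpl::vector<unsigned int>
        > token_type;
        typedef lex::lexertl::lexer<token_type> lexer_type;

        number_tokens<lexer_type> tokens;

        std::string input("1, 2, 3");
        char const* first = input.c_str();
        char const* last = first + input.size();

        std::vector<unsigned int> values;

        // the tokens matched in the "WS" state act as the skip parser; any
        // trailing whitespace would be left unconsumed because of
        // skip_flag::dont_postskip
        bool r = lex::tokenize_and_phrase_parse(first, last, tokens,
            tokens.number % ',', qi::in_state("WS")[tokens.self],
            qi::skip_flag::dont_postskip, values);

        if (r)
            std::cout << values.size() << " numbers parsed" << std::endl;
        return 0;
    }
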
[heading Template parameters]

[table
    [[Parameter]            [Description]]
    [[`Iterator`]           [__fwditer__ pointing to the underlying input sequence to parse.]]
    [[`Lexer`]              [A lexer (token definition) object.]]
    [[`F`]                  [A function object called for each generated token.]]
    [[`ParserExpr`]         [An expression that can be converted to a Qi parser.]]
    [[`Skipper`]            [Parser used to skip whitespace.]]
    [[`Attr1`, `Attr2`, ..., `AttrN`][One or more attributes.]]
]

[endsect]