
Studying note of GCC-3.4.6 source (15)

2010-03-17 11:50
2.2.3.1.5.2.2.6. FIELD_DECL of RECORD_TYPE
We have seen that FIELD_DECL is used to represent a non-static data member; the details are given in the following excerpt:
FIELD_DECL[2]
These nodes represent non-static data members. The DECL_SIZE and DECL_ALIGN behave as for VAR_DECL nodes. The DECL_FIELD_BITPOS gives the first bit used for this field, as an INTEGER_CST. These values are indexed from zero, where zero indicates the first bit in the object.
If DECL_C_BIT_FIELD holds, this field is a bit-field.
2.2.3.1.5.2.2.6.1. Find Out Alignment of FIELD_DECL
Similarly, fields in a RECORD_TYPE (class or struct) are laid out by place_field. It is worth pointing out that this function is invoked not only by layout_type, to lay out compiler-generated structures, but also by layout_class_type, to lay out user-defined structures.
The alignment of a field greatly affects the layout of the structure. On some machines, mainly RISC ones, an access to unaligned data will fault; on others nothing crashes, but efficiency suffers badly. So unless told otherwise, the compiler generates code with data aligned. Of course, this is not free: aligned data is larger than packed data and occupies more disk space and memory. Sometimes, to ensure that data has the same size on all target platforms, or to force data into a limited space, one needs to specify the alignment of the data or make it packed; examples can be found in the code of the Linux kernel and its drivers. To control data layout, GCC provides attributes such as packed and aligned, directives such as #pragma pack(N), and switches such as -mms-bitfields/-mno-ms-bitfields.
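As a minimal illustration of these mechanisms (a sketch of my own, not taken from the GCC sources; the sizes assume a typical 32-bit x86 target):

/* Natural layout: 3 bytes of padding after 'c' so that 'i' starts on
   a 4-byte boundary; sizeof (struct natural) == 8.  */
struct natural { char c; int i; };

/* The packed attribute removes the padding; 'i' may now be unaligned,
   and sizeof (struct packed_s) == 5, at the cost of slower (or, on
   strict-alignment targets, faulting) access to 'i'.  */
struct packed_s { char c; int i; } __attribute__ ((packed));

/* The aligned attribute raises the alignment of the whole object;
   here sizeof (struct aligned_s) == 16.  */
struct aligned_s { char c; int i; } __attribute__ ((aligned (16)));

/* #pragma pack(N) caps the alignment of every following member at N
   bytes; here 'i' is aligned to 2 and sizeof (struct pragma_s) == 6.  */
#pragma pack(2)
struct pragma_s { char c; int i; };
#pragma pack()
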
What makes field layout more complex is the bit-field. MS layout permits adjacent bit-fields of the same type to share a memory unit as long as the total size does not overflow it, while in the GCC default layout even bit-fields of different types can share the same memory unit. An important variation is the zero-size bit-field, which must be nameless. In short, a zero-size bit-field inserted among other bit-fields causes the following bit-field to be placed at its alignment boundary, no matter how many bits are left in the previous memory unit; a trailing zero-size bit-field is ignored.
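For instance (my own sketch; the sizes assume GCC's default layout on 32-bit x86):

/* Without the unnamed zero-size member, f1 and f2 share one int:
   sizeof (struct shared) == 4.  */
struct shared { int f1 : 10; int f2 : 10; };

/* The unnamed ': 0' member forces f2 to the next int boundary, so two
   ints are needed: sizeof (struct split) == 8.  */
struct split { int f1 : 10; int : 0; int f2 : 10; };

/* A trailing zero-size bit-field is simply ignored:
   sizeof (struct tail) == 4.  */
struct tail { int f1 : 10; int f2 : 10; int : 0; };
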
The following code snippet of place_field lays out fields as described above.

place_field (continue)

860 /* Work out the known alignment so far. Note that A & (-A) is the
861 value of the least-significant bit in A that is one. */
862 if (!integer_zerop (rli->bitpos))
863 known_align = (tree_low_cst (rli->bitpos, 1)
864 & - tree_low_cst (rli->bitpos, 1));
865 else if (integer_zerop (rli->offset))
866 known_align = BIGGEST_ALIGNMENT;
867 else if (host_integerp (rli->offset, 1))
868 known_align = (BITS_PER_UNIT
869 * (tree_low_cst (rli->offset, 1)
870 & - tree_low_cst (rli->offset, 1)));
871 else
872 known_align = rli->offset_align;
873
874 desired_align = update_alignment_for_field (rli, field, known_align);

bitpos in record_layout_info holds the current bit position; it is counted from offset. offset records, in bytes, the part already laid out and does not include the bits in bitpos, as we will see later.
So beginning at line 862, if bitpos is nonzero (for example, when a small type follows a large one, as in the example above), known_align is set to the lowest nonzero bit of bitpos. Otherwise, if we are at the very beginning of the record (offset is size_zero_node, condition at line 865), known_align is set to the platform's BIGGEST_ALIGNMENT (128 bits on x86); otherwise, if offset can be handled as a host integer (an integer constant below 4G, line 867), known_align is the lowest nonzero bit of offset multiplied by BITS_PER_UNIT. If the condition at line 871 is reached, a field of non-constant size has been seen (for C++ this almost always indicates an erroneous declaration; recall also that a template is laid out only at the point of instantiation), so known_align cannot be derived from the part already laid out and is simply assumed to equal offset_align.
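As a quick illustration of the A & -A idiom mentioned in the comment at line 860 (a small standalone sketch, not part of the GCC sources):

#include <stdio.h>

int main (void)
{
  unsigned int bitpos[3] = { 17, 24, 96 };
  int i;

  /* 17 & -17 == 1, 24 & -24 == 8, 96 & -96 == 32: the result is the
     largest power of two dividing the position, i.e. the alignment
     that position is known to have.  */
  for (i = 0; i < 3; i++)
    printf ("bitpos %3u => known_align %2u\n",
            bitpos[i], bitpos[i] & -bitpos[i]);
  return 0;
}
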
We have seen update_alignment_for_field above; it returns the final alignment (in bits) for the field. Remember that, as we found out in 2.2.3.3.1.5.3.5.2 Treatment for Bit Field Decl, for a non-zero-size bit-field whose type is not replaced by an integer mode in layout_decl, DECL_ALIGN will be zero.
2.2.3.1.5.2.2.6.2. Alignment – unnecessary packing
If the switch -Wpacked is used, warn_packed below is nonzero; in that case, when the packed attribute is unnecessary or inefficient, a warning is given. A nonzero DECL_PACKED indicates the field is packed (that is, the packed attribute is in use).

place_field (continue)

876 if (warn_packed && DECL_PACKED (field))
877 {
878 if (known_align >= TYPE_ALIGN (type))
879 {
880 if (TYPE_ALIGN (type) > desired_align)
881 {
882 if (STRICT_ALIGNMENT)
883 warning ("%Jpacked attribute causes inefficient alignment "
884 "for '%D'", field, field);
885 else
886 warning ("%Jpacked attribute is unnecessary for '%D'",
887 field, field);
888 }
889 }
890 else
891 rli->packed_maybe_necessary = 1;
892 }

Now known_align is essentially the alignment of the current position, i.e., the first free bit after the previous field, and desired_align is the alignment expected for the field. Clearly, if known_align is no smaller than desired_align, the field can be placed at the next position naturally, even without the packed attribute. For such an unnecessary packed attribute the compiler may still generate inefficient code: STRICT_ALIGNMENT at line 882 is 1 on platforms where a move instruction fails when given unaligned data, and on those platforms the compiler works around possibly unaligned accesses even though, in this case, doing so is unnecessary.
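For example, compiled with -Wpacked on x86, a declaration like the following (a sketch of mine) would take the warning branch at line 886, since the member is already adequately aligned and packing it changes nothing:

struct s
{
  int a;
  /* 'b' would start at byte offset 4 anyway: known_align is 32 bits,
     which is >= TYPE_ALIGN (int), while packing lowers desired_align
     to 8 bits, so GCC reports the attribute as unnecessary (or, on a
     STRICT_ALIGNMENT target, as causing inefficient alignment).  */
  int b __attribute__ ((packed));
};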
2.2.3.1.5.2.2.6.3. Alignment – padding required
If desired_align is larger than known_align, there is a gap before the field and padding is required to round the next field up to the desired boundary. Below, warn_padded is set by the switch -Wpadded; if it is set, a warning is given for the padding.

place_field (continue)

894 /* Does this field automatically have alignment it needs by virtue
895 of the fields that precede it and the record's own alignment? */
896 if (known_align < desired_align)
897 {
898 /* No, we need to skip space before this field.
899 Bump the cumulative size to multiple of field alignment. */
900
901 if (warn_padded)
902 warning ("%Jpadding struct to align '%D'", field, field);
903
904 /* If the alignment is still within offset_align, just align
905 the bit position. */
906 if (desired_align < rli->offset_align)
907 rli->bitpos = round_up (rli->bitpos, desired_align);
908 else
909 {
910 /* First adjust OFFSET by the partial bits, then align. */
911 rli->offset
912 = size_binop (PLUS_EXPR, rli->offset,
913 convert (sizetype,
914 size_binop (CEIL_DIV_EXPR, rli->bitpos,
915 bitsize_unit_node)));
916 rli->bitpos = bitsize_zero_node;
917
918 rli->offset = round_up (rli->offset, desired_align / BITS_PER_UNIT);
919 }
920
921 if (! TREE_CONSTANT (rli->offset))
922 rli->offset_align = desired_align;
923
924 }

When desired_align is larger than known_align but less than offset_align, take for example f2 in the declaration below:
struct temp {
  int f1:17;   // known_align: 1, offset_align: 128, bitpos: 17
  short f2:10; // desired_align: 16
};
In this case, under the GCC default layout, f1 and f2 can share the same int-sized memory unit. Since bitpos records the bits that have not yet crossed the offset_align boundary and have not yet been committed to offset, the field, even after rounding, still stays within that boundary; so it is enough to round bitpos up to the boundary of desired_align (later we will see that for bit-fields under MS layout this adjustment is skipped).
If instead desired_align is larger than offset_align, we are within a layout of non-constant size (in a layout where all fields have constant size, offset_align stays at 128, the biggest possible alignment); in that case offset_align records the alignment of the part already laid out. The condition at line 921 is then satisfied too, so offset_align keeps tracking the alignment.
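For the common constant-size case, the padding branch can be seen with a declaration like this (a sketch; x86):

struct padded
{
  char c;   /* placed at offset 0 */
  /* When placing 'i', known_align is 8 bits (the position right after
     'c') but desired_align is 32 bits, so 3 bytes of padding are
     inserted; with -Wpadded, GCC emits "padding struct to align 'i'".  */
  int i;    /* placed at offset 4 */
};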
2.2.3.1.5.2.2.6.4. Alignment – bit field in GCC layout
As we have seen at line 829, on machines without bit-field instructions PCC_BITFIELD_TYPE_MATTERS must be defined as 1 so that a bit-field does not cross an alignment boundary of its type (x86 is such a machine). Although in the GCC default layout bit-fields of types of different sizes can share the same memory unit, with PCC_BITFIELD_TYPE_MATTERS set to 1 it must be ensured that no bit-field spills across such a boundary.

place_field (continue)

926 /* Handle compatibility with PCC. Note that if the record has any
927 variable-sized fields, we need not worry about compatibility. */
928 #ifdef PCC_BITFIELD_TYPE_MATTERS
929 if (PCC_BITFIELD_TYPE_MATTERS
930 && ! (* targetm.ms_bitfield_layout_p) (rli->t)
931 && TREE_CODE (field) == FIELD_DECL
932 && type != error_mark_node
933 && DECL_BIT_FIELD (field)
934 && ! DECL_PACKED (field)
935 && maximum_field_alignment == 0
936 && ! integer_zerop (DECL_SIZE (field))
937 && host_integerp (DECL_SIZE (field), 1)
938 && host_integerp (rli->offset, 1)
939 && host_integerp (TYPE_SIZE (type), 1))
940 {
941 unsigned int type_align = TYPE_ALIGN (type);
942 tree dsize = DECL_SIZE (field);
943 HOST_WIDE_INT field_size = tree_low_cst (dsize, 1);
944 HOST_WIDE_INT offset = tree_low_cst (rli->offset, 0);
945 HOST_WIDE_INT bit_offset = tree_low_cst (rli->bitpos, 0);
946
947 #ifdef ADJUST_FIELD_ALIGN
948 if (! TYPE_USER_ALIGN (type))
949 type_align = ADJUST_FIELD_ALIGN (field, type_align);
950 #endif
951
952 /* A bit field may not span more units of alignment of its type
953 than its type itself. Advance to next boundary if necessary. */
954 if (excess_unit_span (offset, bit_offset, field_size, type_align, type))
955 rli->bitpos = round_up (rli->bitpos, type_align);
956
957 TYPE_USER_ALIGN (rli->t) |= TYPE_USER_ALIGN (type);
958 }
959 #endif
960
961 #ifdef BITFIELD_NBYTES_LIMITED

998 #endif

As seen in the section about the layout of unions, the macro ADJUST_FIELD_ALIGN invokes x86_field_alignment to adjust the type alignment according to the platform's requirements. If the bit-field would cross an alignment boundary of its type, bitpos is advanced to the next boundary of the type alignment, that is, the bit-field is placed in a new memory unit.
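On ix86, for instance, x86_field_alignment caps the alignment of double (and long long) fields at 32 bits when not compiling with -malign-double, matching the SVR4/i386 ABI. A sketch of the effect:

/* On a typical 32-bit x86 Linux target the natural 64-bit alignment
   of double is adjusted down to 32 bits for fields, so 'd' is placed
   at offset 4 and sizeof (struct adj) == 12 rather than 16.  */
struct adj
{
  char c;
  double d;
};
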
excess_unit_span is used to detect this boundary-crossing situation.

801 static int
802 excess_unit_span (HOST_WIDE_INT byte_offset, HOST_WIDE_INT bit_offset, in stor-layout.c
803 HOST_WIDE_INT size, HOST_WIDE_INT align, tree type)
804 {
805 /* Note that the calculation of OFFSET might overflow; we calculate it so
806 that we still get the right result as long as ALIGN is a power of two. */
807 unsigned HOST_WIDE_INT offset = byte_offset * BITS_PER_UNIT + bit_offset;
808
809 offset = offset % align;
810 return ((offset + size + align - 1) / align
811 > ((unsigned HOST_WIDE_INT) tree_low_cst (TYPE_SIZE (type), 1)
812 / align));
813 }

Notice the (offset + size + align - 1) / align at line 810: it counts how many alignment units offset + size spans, rounding up. Line 811 works out the ratio of the type's size to the type alignment (for example, on x86 double has a ratio of 2). If the bit-field declaration would straddle an alignment boundary, bitpos is bumped up to the next boundary.
Also notice line 957: for a struct under GCC layout, if any field's type carries user-specified alignment, the whole struct is regarded as user-aligned. At line 961, BITFIELD_NBYTES_LIMITED, if defined, means a bit-field cannot even cross a byte boundary unless it completely fills the preceding byte; it is not defined for the x86 platform.
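A declaration that triggers excess_unit_span (a sketch; x86, GCC default layout):

struct spans
{
  short a : 16;   /* occupies bits 0..15, so bitpos becomes 16 */
  /* For 'b': offset = 0, bit_offset = 16, size = 24, align = 32, so
     (16 + 24 + 32 - 1) / 32 == 2 while TYPE_SIZE (int) / 32 == 1.
     excess_unit_span returns nonzero, bitpos is rounded up to 32, and
     'b' occupies bits 32..55 instead of straddling an int boundary;
     sizeof (struct spans) == 8.  */
  int b : 24;
};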
2.2.3.1.5.2.2.6.5. Alignment – bit field in MS layout
Refer to Treatment for Bit Field Decl for background on MS layout. The processing here is similar, but it has to consider the order in which the bit-fields appear, which makes it more complex. Below at line 1024, a non-NULL rli->prev_field indicates that the previous field was a bit-field too, and the condition at line 1032 guarantees that both the current and the previous bit-fields have nonzero size and the same type size.

place_field (continue)

1000 /* See the docs for TARGET_MS_BITFIELD_LAYOUT_P for details.
1001 A subtlety:
1002 When a bit field is inserted into a packed record, the whole
1003 size of the underlying type is used by one or more same-size
1004 adjacent bitfields. (That is, if its long:3, 32 bits is
1005 used in the record, and any additional adjacent long bitfields are
1006 packed into the same chunk of 32 bits. However, if the size
1007 changes, a new field of that size is allocated.) In an unpacked
1008 record, this is the same as using alignment, but not equivalent
1009 when packing.
1010
1011 Note: for compatibility, we use the type size, not the type alignment
1012 to determine alignment, since that matches the documentation */
1013
1014 if ((* targetm.ms_bitfield_layout_p) (rli->t)
1015 && ((DECL_BIT_FIELD_TYPE (field) && ! DECL_PACKED (field))
1016 || (rli->prev_field && ! DECL_PACKED (rli->prev_field))))
1017 {
1018 /* At this point, either the prior or current are bitfields,
1019 (possibly both), and we're dealing with MS packing. */
1020 tree prev_saved = rli->prev_field;
1021
1022 /* Is the prior field a bitfield? If so, handle "runs" of same
1023 type size fields. */
1024 if (rli->prev_field /* necessarily a bitfield if it exists. */)
1025 {
1026 /* If both are bitfields, nonzero, and the same size, this is
1027 the middle of a run. Zero declared size fields are special
1028 and handled as "end of run". (Note: it's nonzero declared
1029 size, but equal type sizes!) (Since we know that both
1030 the current and previous fields are bitfields by the
1031 time we check it, DECL_SIZE must be present for both.) */
1032 if (DECL_BIT_FIELD_TYPE (field)
1033 && ! integer_zerop (DECL_SIZE (field))
1034 && ! integer_zerop (DECL_SIZE (rli->prev_field))
1035 && host_integerp (DECL_SIZE (rli->prev_field), 0)
1036 && host_integerp (TYPE_SIZE (type), 0)
1037 && simple_cst_equal (TYPE_SIZE (type),
1038 TYPE_SIZE (TREE_TYPE (rli->prev_field))))
1039 {
1040 /* We're in the middle of a run of equal type size fields; make
1041 sure we realign if we run out of bits. (Not decl size,
1042 type size!) */
1043 HOST_WIDE_INT bitsize = tree_low_cst (DECL_SIZE (field), 0);
1044
1045 if (rli->remaining_in_alignment < bitsize)
1046 {
1047 /* out of bits; bump up to next 'word'. */
1048 rli->offset = DECL_FIELD_OFFSET (rli->prev_field);
1049 rli->bitpos
1050 = size_binop (PLUS_EXPR, TYPE_SIZE (type),
1051 DECL_FIELD_BIT_OFFSET (rli->prev_field));
1052 rli->prev_field = field;
1053 rli->remaining_in_alignment
1054 = tree_low_cst (TYPE_SIZE (type), 0);
1055 }
1056
1057 rli->remaining_in_alignment -= bitsize;
1058 }

One of the great differences between GCC layout and MS layout is that in MS layout adjacent bit-fields share the same storage unit only if they are of the same type. The remaining_in_alignment field in record_layout_info records the bits left in the shared storage unit; if they are not enough for the current bit-field, the position is bumped up to the next alignment boundary. Pay attention to bitpos at line 1048: in MS layout its effect on placement is largely taken over by remaining_in_alignment, but it still records the size of the uncommitted part during layout. There is a tricky premise here: since layout_decl, when processing a bit-field, has already fitted its width to a type of the closest size, the bump-up at line 1045 can only arise when the total size of adjacent bit-fields of the same type exceeds the size of that type.
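The classic case showing the difference (a sketch; the sizes are what one would expect on x86 with the default layout versus -mms-bitfields):

struct run
{
  char a : 4;
  /* GCC default layout: 'a' and 'b' share the same storage even
     though their types differ, and sizeof (struct run) == 4.
     MS layout (-mms-bitfields): 'b' has a different type size from
     'a', so it cannot continue a's run; it starts a new int-aligned
     unit at offset 4 and sizeof (struct run) == 8.  */
  int b : 4;
};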

place_field (continue)

1059 else
1060 {
1061 /* End of a run: if leaving a run of bitfields of the same type
1062 size, we have to "use up" the rest of the bits of the type
1063 size.
1064
1065 Compute the new position as the sum of the size for the prior
1066 type and where we first started working on that type.
1067 Note: since the beginning of the field was aligned then
1068 of course the end will be too. No round needed. */
1069
1070 if (!integer_zerop (DECL_SIZE (rli->prev_field)))
1071 {
1072 tree type_size = TYPE_SIZE (TREE_TYPE (rli->prev_field));
1073
1074 rli->bitpos
1075 = size_binop (PLUS_EXPR, type_size,
1076 DECL_FIELD_BIT_OFFSET (rli->prev_field));
1077 }
1078 else
1079 /* We "use up" size zero fields; the code below should behave
1080 as if the prior field was not a bitfield. */
1081 prev_saved = NULL;
1082
1083 /* Cause a new bitfield to be captured, either this time (if
1084 currently a bitfield) or next time we see one. */
1085 if (!DECL_BIT_FIELD_TYPE(field)
1086 || integer_zerop (DECL_SIZE (field)))
1087 rli->prev_field = NULL;
1088 }
1089
1090 normalize_rli (rli);
1091 }

If the block at line 1059 is entered and the condition at line 1070 is satisfied, then either 1) a zero-size bit-field follows a nonzero-size bit-field; or 2) the current bit-field also has nonzero size but a different type than the previous one; or 3) the current field is not a bit-field. In MS layout the current field should then be placed in a new memory unit. The expression at line 1074 is therefore not quite correct; in v4.3.0 it becomes:
rli->bitpos = size_binop (PLUS_EXPR, rli->bitpos, bitsize_int (rli->remaining_in_alignment));
Then, if the previous bit-field has zero size, the code at line 1081 is executed.
Next, at line 1085, if the current field is not a bit-field, or its size is zero, the next field should be treated as following a non-bit-field.
Since bitpos may have been adjusted above, it can exceed the alignment, so bitpos and offset need to be normalized; that is the job of normalize_rli.

659 void
660 normalize_rli (record_layout_info rli) in stor-layout.c
661 {
662 normalize_offset (&rli->offset, &rli->bitpos, rli->offset_align);
663 }

In normalized form, bitpos must lie within the alignment boundary, i.e., be smaller than offset_align.

614 void
615 normalize_offset (tree *poffset, tree *pbitpos, unsigned int off_align) in stor-layout.c
616 {
617 /* If the bit position is now larger than it should be, adjust it
618 downwards. */
619 if (compare_tree_int (*pbitpos, off_align) >= 0)
620 {
621 tree extra_aligns = size_binop (FLOOR_DIV_EXPR, *pbitpos,
622 bitsize_int (off_align));
623
624 *poffset
625 = size_binop (PLUS_EXPR, *poffset,
626 size_binop (MULT_EXPR, convert (sizetype, extra_aligns),
627 size_int (off_align / BITS_PER_UNIT)));
628
629 *pbitpos
630 = size_binop (FLOOR_MOD_EXPR, *pbitpos, bitsize_int (off_align));
631 }
632 }
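
A worked example of the arithmetic, assuming off_align is 32 bits, offset is 2 bytes and bitpos is 70 bits:

/* 70 >= 32, so:
     extra_aligns = 70 / 32         = 2            (FLOOR_DIV_EXPR)
     offset       = 2 + 2 * (32/8)  = 10 bytes
     bitpos       = 70 % 32         = 6 bits       (FLOOR_MOD_EXPR)
   The overall position is unchanged: 2*8 + 70 == 10*8 + 6 == 86 bits,
   but bitpos now lies below off_align.  */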

Below, the condition at line 1105 is satisfied if the current field 1) is not a bit-field; or it is a bit-field and 2) the previous field was a bit-field of a different type size, or 3) there was no previous bit-field and the current field has nonzero size. Notice that for a non-bit-field DECL_SIZE equals its type's TYPE_SIZE, so remaining_in_alignment computed at line 1121 is 0; for a bit-field it is the number of bits of TYPE_SIZE left over.

place_field (continue)

1093 /* If we're starting a new run of same size type bitfields
1094 (or a run of non-bitfields), set up the "first of the run"
1095 fields.
1096
1097 That is, if the current field is not a bitfield, or if there
1098 was a prior bitfield the type sizes differ, or if there wasn't
1099 a prior bitfield the size of the current field is nonzero.
1100
1101 Note: we must be sure to test ONLY the type size if there was
1102 a prior bitfield and ONLY for the current field being zero if
1103 there wasn't. */
1104
1105 if (!DECL_BIT_FIELD_TYPE (field)
1106 || ( prev_saved != NULL
1107 ? !simple_cst_equal (TYPE_SIZE (type),
1108 TYPE_SIZE (TREE_TYPE (prev_saved)))
1109 : ! integer_zerop (DECL_SIZE (field)) ))
1110 {
1111 /* Never smaller than a byte for compatibility. */
1112 unsigned int type_align = BITS_PER_UNIT;
1113
1114 /* (When not a bitfield), we could be seeing a flex array (with
1115 no DECL_SIZE). Since we won't be using remaining_in_alignment
1116 until we see a bitfield (and come by here again) we just skip
1117 calculating it. */
1118 if (DECL_SIZE (field) != NULL
1119 && host_integerp (TYPE_SIZE (TREE_TYPE (field)), 0)
1120 && host_integerp (DECL_SIZE (field), 0))
1121 rli->remaining_in_alignment
1122 = tree_low_cst (TYPE_SIZE (TREE_TYPE (field)), 0)
1123 - tree_low_cst (DECL_SIZE (field), 0);
1124
1125 /* Now align (conventionally) for the new type. */
1126 if (!DECL_PACKED(field))
1127 type_align = MAX(TYPE_ALIGN (type), type_align);
1128
1129 if (prev_saved
1130 && DECL_BIT_FIELD_TYPE (prev_saved)
1131 /* If the previous bit-field is zero-sized, we've already
1132 accounted for its alignment needs (or ignored it, if
1133 appropriate) while placing it. */
1134 && ! integer_zerop (DECL_SIZE (prev_saved)))
1135 type_align = MAX (type_align,
1136 TYPE_ALIGN (TREE_TYPE (prev_saved)));
1137
1138 if (maximum_field_alignment != 0)
1139 type_align = MIN (type_align, maximum_field_alignment);
1140
1141 rli->bitpos = round_up (rli->bitpos, type_align);
1142
1143 /* If we really aligned, don't allow subsequent bitfields
1144 to undo that. */
1145 rli->prev_field = NULL;
1146 }
1147 }

Note that prev_saved used at line 1129 was set to NULL above (line 1081) when the previous field was a zero-size bit-field; otherwise it still holds the previous bit-field. The current bit-field has already been accounted for in remaining_in_alignment, but it must be placed at an alignment boundary, so at line 1141 bitpos takes this adjustment; the same applies to a non-bit-field.
2.2.3.1.5.2.2.6.6. Update record_layout_info
Now that we have determined the alignment and the position of the starting bit (via offset and bitpos) for the field, it is time to put this information into the FIELD_DECL and update record_layout_info.

place_field (continue)

1149 /* Offset so far becomes the position of this field after normalizing. */
1150 normalize_rli (rli);
1151 DECL_FIELD_OFFSET (field) = rli->offset;
1152 DECL_FIELD_BIT_OFFSET (field) = rli->bitpos;
1153 SET_DECL_OFFSET_ALIGN (field, rli->offset_align);
1154
1155 /* If this field ended up more aligned than we thought it would be (we
1156 approximate this by seeing if its position changed), lay out the field
1157 again; perhaps we can use an integral mode for it now. */
1158 if (!integer_zerop (DECL_FIELD_BIT_OFFSET (field)))
1159 actual_align = (tree_low_cst (DECL_FIELD_BIT_OFFSET (field), 1)
1160 & - tree_low_cst (DECL_FIELD_BIT_OFFSET (field), 1));
1161 else if (integer_zerop (DECL_FIELD_OFFSET (field)))
1162 actual_align = BIGGEST_ALIGNMENT;
1163 else if (host_integerp (DECL_FIELD_OFFSET (field), 1))
1164 actual_align = (BITS_PER_UNIT
1165 * (tree_low_cst (DECL_FIELD_OFFSET (field), 1)
1166 & - tree_low_cst (DECL_FIELD_OFFSET (field), 1)));
1167 else
1168 actual_align = DECL_OFFSET_ALIGN (field);
1169
1170 if (known_align != actual_align)
1171 layout_decl (field, actual_align);
1172
1173 /* Only the MS bitfields use this. */
1174 if (rli->prev_field == NULL && DECL_BIT_FIELD_TYPE(field))
1175 rli->prev_field = field;
1176
1177 /* Now add size of this field to the size of the record. If the size is
1178 not constant, treat the field as being a multiple of bytes and just
1179 adjust the offset, resetting the bit position. Otherwise, apportion the
1180 size amongst the bit position and offset. First handle the case of an
1181 unspecified size, which can happen when we have an invalid nested struct
1182 definition, such as struct j { struct j { int i; } }. The error message
1183 is printed in finish_struct. */
1184 if (DECL_SIZE (field) == 0)
1185 /* Do nothing. */;
1186 else if (TREE_CODE (DECL_SIZE_UNIT (field)) != INTEGER_CST
1187 || TREE_CONSTANT_OVERFLOW (DECL_SIZE_UNIT (field)))
1188 {
1189 rli->offset
1190 = size_binop (PLUS_EXPR, rli->offset,
1191 convert (sizetype,
1192 size_binop (CEIL_DIV_EXPR, rli->bitpos,
1193 bitsize_unit_node)));
1194 rli->offset
1195 = size_binop (PLUS_EXPR, rli->offset, DECL_SIZE_UNIT (field));
1196 rli->bitpos = bitsize_zero_node;
1197 rli->offset_align = MIN (rli->offset_align, desired_align);
1198 }
1199 else
1200 {
1201 rli->bitpos = size_binop (PLUS_EXPR, rli->bitpos, DECL_SIZE (field));
1202 normalize_rli (rli);
1203 }
1204 }

Notice that actual_align is computed in the same way as known_align at line 862. If the two differ, layout_decl is called to lay out the field again; for a bit-field, an integer mode may be found this time.
Since the field has now been placed, the remaining code adjusts the fields of rli; in a layout of non-constant size, offset_align must also be synchronized (line 1197).
2.2.3.1.5.2.3. Finalize the layout
After all fields are laid out, at line 1748 in layout_type, lang_adjust_rli is a hook the front-end can set to provide special treatment. In the current version it is unused (and in v4.3.0 it has been removed). Next, the size, alignment, etc. of the structure are finalized.

1444 void
1445 finish_record_layout (record_layout_info rli, int free_p) in stor-layout.c
1446 {
1447 /* Compute the final size. */
1448 finalize_record_size (rli);
1449
1450 /* Compute the TYPE_MODE for the record. */
1451 compute_record_mode (rli->t);
1452
1453 /* Perform any last tweaks to the TYPE_SIZE, etc. */
1454 finalize_type_size (rli->t);
1455
1456 /* Lay out any static members. This is done now because their type
1457 may use the record's type. */
1458 while (rli->pending_statics)
1459 {
1460 layout_decl (TREE_VALUE (rli->pending_statics), 0);
1461 rli->pending_statics = TREE_CHAIN (rli->pending_statics);
1462 }
1463
1464 /* Clean up. */
1465 if (free_p)
1466 free (rli);
1467 }
2.2.3.1.5.2.3.1. Determine Size of the Type
The function finalize_record_size computes the final size of the aggregate type from the given record_layout_info.

1210 static void
1211 finalize_record_size (record_layout_info rli) in stor-layout.c
1212 {
1213 tree unpadded_size, unpadded_size_unit;
1214
1215 /* Now we want just byte and bit offsets, so set the offset alignment
1216 to be a byte and then normalize. */
1217 rli->offset_align = BITS_PER_UNIT;
1218 normalize_rli (rli);
1219
1220 /* Determine the desired alignment. */
1221 #ifdef ROUND_TYPE_ALIGN
1222 TYPE_ALIGN (rli->t) = ROUND_TYPE_ALIGN (rli->t, TYPE_ALIGN (rli->t),
1223 rli->record_align);
1224 #else
1225 TYPE_ALIGN (rli->t) = MAX (TYPE_ALIGN (rli->t), rli->record_align);
1226 #endif
1227
1228 /* Compute the size so far. Be sure to allow for extra bits in the
1229 size in bytes. We have guaranteed above that it will be no more
1230 than a single byte. */
1231 unpadded_size = rli_size_so_far (rli);
1232 unpadded_size_unit = rli_size_unit_so_far (rli);
1233 if (! integer_zerop (rli->bitpos))
1234 unpadded_size_unit
1235 = size_binop (PLUS_EXPR, unpadded_size_unit, size_one_node);
1236
1237 /* Round the size up to be a multiple of the required alignment. */
1238 TYPE_SIZE (rli->t) = round_up (unpadded_size, TYPE_ALIGN (rli->t));
1239 TYPE_SIZE_UNIT (rli->t) = round_up (unpadded_size_unit,
1240 TYPE_ALIGN (rli->t) / BITS_PER_UNIT);
1241
1242 if (warn_padded && TREE_CONSTANT (unpadded_size)
1243 && simple_cst_equal (unpadded_size, TYPE_SIZE (rli->t)) == 0)
1244 warning ("padding struct size to alignment boundary");
1245
1246 if (warn_packed && TREE_CODE (rli->t) == RECORD_TYPE
1247 && TYPE_PACKED (rli->t) && ! rli->packed_maybe_necessary
1248 && TREE_CONSTANT (unpadded_size))
1249 {
1250 tree unpacked_size;
1251
1252 #ifdef ROUND_TYPE_ALIGN
1253 rli->unpacked_align
1254 = ROUND_TYPE_ALIGN (rli->t, TYPE_ALIGN (rli->t), rli->unpacked_align);
1255 #else
1256 rli->unpacked_align = MAX (TYPE_ALIGN (rli->t), rli->unpacked_align);
1257 #endif
1258
1259 unpacked_size = round_up (TYPE_SIZE (rli->t), rli->unpacked_align);
1260 if (simple_cst_equal (unpacked_size, TYPE_SIZE (rli->t)))
1261 {
1262 TYPE_PACKED (rli->t) = 0;
1263
1264 if (TYPE_NAME (rli->t))
1265 {
1266 const char *name;
1267
1268 if (TREE_CODE (TYPE_NAME (rli->t)) == IDENTIFIER_NODE)
1269 name = IDENTIFIER_POINTER (TYPE_NAME (rli->t));
1270 else
1271 name = IDENTIFIER_POINTER (DECL_NAME (TYPE_NAME (rli->t)));
1272
1273 if (STRICT_ALIGNMENT)
1274 warning ("packed attribute causes inefficient alignment for `%s'", name);
1275 else
1276 warning ("packed attribute is unnecessary for `%s'", name);
1277 }
1278 else
1279 {
1280 if (STRICT_ALIGNMENT)
1281 warning ("packed attribute causes inefficient alignment");
1282 else
1283 warning ("packed attribute is unnecessary");
1284 }
1285 }
1286 }
1287 }

Lines 1217 and 1218 set offset_align to one byte and normalize, so the accumulated position is expressed as whole bytes plus the remaining bits (less than a byte). At line 1221, ROUND_TYPE_ALIGN is not defined for x86. Lines 1231 and 1232 find out how many bits and bytes the struct actually occupies so far.

675 tree
676 rli_size_so_far (record_layout_info rli) in stor-layout.c
677 {
678 return bit_from_pos (rli->offset, rli->bitpos);
679 }

582 tree
583 bit_from_pos (tree offset, tree bitpos) in stor-layout.c
584 {
585 return size_binop (PLUS_EXPR, bitpos,
586 size_binop (MULT_EXPR, convert (bitsizetype, offset),
587 bitsize_unit_node));
588 }

667 tree
668 rli_size_unit_so_far (record_layout_info rli)
669 {
670 return byte_from_pos (rli->offset, rli->bitpos);
671 }

590 tree
591 byte_from_pos (tree offset, tree bitpos) in stor-layout.c
592 {
593 return size_binop (PLUS_EXPR, offset,
594 convert (sizetype,
595 size_binop (TRUNC_DIV_EXPR, bitpos,
596 bitsize_unit_node)));
597 }

Notice that in byte_from_pos the byte count does not include the remaining bits, so at line 1233, if bitpos is nonzero, an extra byte is added.
At line 1225, record_align holds the biggest alignment among the fields; it becomes the alignment of the structure as well, and the size of the structure must be a multiple of that alignment.

307 tree
308 round_up (tree value, int divisor) in stor-layout.c
309 {
310 tree arg = size_int_type (divisor, TREE_TYPE (value));
311
312 return size_binop (MULT_EXPR, size_binop (CEIL_DIV_EXPR, value, arg), arg);
313 }

In the remaining code, if there is padding at the tail and -Wpadded is in use, a warning is given. For a packed layout, if the result is the same as the unpacked one and -Wpacked is in effect, a warning is given as well.
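For instance (a sketch; x86):

/* The unpadded size is 5 bytes (40 bits); TYPE_ALIGN is 32 bits, so
   TYPE_SIZE is rounded up to 64 bits and sizeof (struct tail_pad) == 8.
   With -Wpadded this yields "padding struct size to alignment
   boundary".  */
struct tail_pad
{
  int i;
  char c;
};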
2.2.3.1.5.2.3.2. Determine Type Mode
Data of BLKmode is never placed in a register (in other words, it can only be accessed in memory), but a RECORD_TYPE below a certain size can in fact fit in a register; so an appropriate machine mode is determined for such a RECORD_TYPE, allowing the back-end to generate more efficient code.

1291 void
1292 compute_record_mode (tree type) in stor-layout.c
1293 {
1294 tree field;
1295 enum machine_mode mode = VOIDmode;
1296
1297 /* Most RECORD_TYPEs have BLKmode, so we start off assuming that.
1298 However, if possible, we use a mode that fits in a register
1299 instead, in order to allow for better optimization down the
1300 line. */
1301 TYPE_MODE (type) = BLKmode;
1302
1303 if (! host_integerp (TYPE_SIZE (type), 1))
1304 return;
1305
1306 /* A record which has any BLKmode members must itself be
1307 BLKmode; it can't go in a register. Unless the member is
1308 BLKmode only because it isn't aligned. */
1309 for (field = TYPE_FIELDS (type); field; field = TREE_CHAIN (field))
1310 {
1311 if (TREE_CODE (field) != FIELD_DECL)
1312 continue;
1313
1314 if (TREE_CODE (TREE_TYPE (field)) == ERROR_MARK
1315 || (TYPE_MODE (TREE_TYPE (field)) == BLKmode
1316 && ! TYPE_NO_FORCE_BLK (TREE_TYPE (field))
1317 && !(TYPE_SIZE (TREE_TYPE (field)) != 0
1318 && integer_zerop (TYPE_SIZE (TREE_TYPE (field)))))
1319 || ! host_integerp (bit_position (field), 1)
1320 || DECL_SIZE (field) == 0
1321 || ! host_integerp (DECL_SIZE (field), 1))
1322 return;
1323
1324 /* If this field is the whole struct, remember its mode so
1325 that, say, we can put a double in a class into a DF
1326 register instead of forcing it to live in the stack. */
1327 if (simple_cst_equal (TYPE_SIZE (type), DECL_SIZE (field)))
1328 mode = DECL_MODE (field);
1329
1330 #ifdef MEMBER_TYPE_FORCES_BLK
1331 /* With some targets, eg. c4x, it is sub-optimal
1332 to access an aligned BLKmode structure as a scalar. */
1333
1334 if (MEMBER_TYPE_FORCES_BLK (field, mode))
1335 return;
1336 #endif /* MEMBER_TYPE_FORCES_BLK */
1337 }
1338
1339 /* If we only have one real field; use its mode. This only applies to
1340 RECORD_TYPE. This does not apply to unions. */
1341 if (TREE_CODE (type) == RECORD_TYPE && mode != VOIDmode)
1342 TYPE_MODE (type) = mode;
1343 else
1344 TYPE_MODE (type) = mode_for_size_tree (TYPE_SIZE (type), MODE_INT, 1);
1345
1346 /* If structure's known alignment is less than what the scalar
1347 mode would need, and it matters, then stick with BLKmode. */
1348 if (TYPE_MODE (type) != BLKmode
1349 && STRICT_ALIGNMENT
1350 && ! (TYPE_ALIGN (type) >= BIGGEST_ALIGNMENT
1351 || TYPE_ALIGN (type) >= GET_MODE_ALIGNMENT (TYPE_MODE (type))))
1352 {
1353 /* If this is the only reason this type is BLKmode, then
1354 don't force containing types to be BLKmode. */
1355 TYPE_NO_FORCE_BLK (type) = 1;
1356 TYPE_MODE (type) = BLKmode;
1357 }
1358 }
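
The effect on a few simple aggregates (a sketch of what the code above would pick on x86):

struct s1 { double d; };     /* the single field covers the whole
                                struct, so TYPE_MODE becomes DFmode  */
struct s2 { int a; int b; }; /* no single field spans the struct; an
                                8-byte integer mode (DImode) is chosen
                                from the total size                  */
struct s3 { int a[100]; };   /* too large for any integer mode, so it
                                stays BLKmode                        */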

Next, building on the result of finalize_record_size, finalize_type_size makes some small tweaks and applies the finalized parameters to the type's variants. For a type of non-constant size it also invokes variable_size to create a SAVE_EXPR, preventing the compiler from evaluating the size expression more than once.
Finally, all pending static member declarations are laid out.